
Wednesday, October 24, 2012

SQL Server


Que : Explain the Different Editions of SQL Server 2000 ?

Editions of SQL Server 2000
SQL Server 2000 is available in different editions to accommodate the unique performance,run-time, and price requirements of different organizations and individuals.

SQL Server 2000 Enterprise Edition. This edition is the complete SQL Server offering for any organization. The Enterprise Edition offers the advanced scalability and reliability features that are necessary for mission-critical line-of-business and Internet scenarios, including Distributed Partitioned Views, log shipping, and enhanced failover clustering. This edition also takes full advantage of the highest-end hardware, with support for up to 32 CPUs and 64 GB of RAM. In addition, the SQL Server 2000 Enterprise Edition includes advanced analysis features.

SQL Server 2000 Standard Edition. This edition is the affordable option for small- and medium-sized organizations that do not require advanced scalability and availability features or all of the more advanced analysis features of the SQL Server 2000 Enterprise Edition. You can use the Standard Edition on symmetric multi-processing systems with up to four CPUs and 2 GB of RAM.

SQL Server 2000 Personal Edition. This edition includes a full set of management tools and most of the functionality of the Standard Edition, but it is optimized for personal use. In addition to running on Microsoft’s server operating systems, the Personal Edition runs on non-server operating systems, including Windows 2000 Professional, Windows NT Workstation 4.0, and Windows 98. Dual-processor systems are also supported. While this edition supports databases of any size, its performance is optimized for single users and small workgroups and degrades with workloads generated by more than five concurrent users.

SQL Server 2000 Developer Edition. This SQL Server offering enables developers to build any type of application on top of SQL Server. This edition includes all of the functionality of the Enterprise Edition but with a special development and test end-user license agreement (EULA) that prohibits production deployment.

SQL Server 2000 Desktop Engine (MSDE). This edition has the basic database engine features of SQL Server 2000. This edition does not include a user interface, management tools, analysis capabilities, merge replication support, client access licenses, developer libraries, or Books Online. This edition also limits the database size and user workload. Desktop Engine has the smallest footprint of any edition of SQL Server 2000 and is thus an ideal embedded or offline data store.

SQL Server 2000 Windows CE Edition. This edition is the version of SQL Server 2000 for devices and appliances running Windows CE. This edition is programmatically compatible with the other editions of SQL Server 2000, so developers can leverage their existing skills and applications to extend the power of a relational data store to solutions running on new classes of devices.






Que : Explain the Enterprise Manager of SQL Server 2000 ?

SQL Server Enterprise Manager
SQL Server Enterprise Manager is the primary administrative tool for SQL Server 2000 and provides a Microsoft Management Console (MMC)–compliant user interface that helps you to perform a variety of administrative tasks:

■ Defining groups of servers running SQL Server
■ Registering individual servers in a group
■ Configuring all SQL Server options for each registered server
■ Creating and administering all SQL Server databases, objects, logins, users, and permissions in each registered server
■ Defining and executing all SQL Server administrative tasks on each registered server
■ Designing and testing SQL statements, batches, and scripts interactively by invoking SQL Query Analyzer
■ Invoking the various wizards defined for SQL Server

MMC is a tool that presents a common interface for managing different server applications in a Microsoft Windows network. Server applications include a component called a snap-in that presents MMC users with a user interface for managing the server application. SQL Server Enterprise Manager is the Microsoft SQL Server 2000 MMC snap-in.

Que : Explain Database Architecture of SQL Server 2000 ?

Database Architecture
SQL Server 2000 data is stored in databases. The data in a database is organized into the logical components that are visible to users, while the database itself is physically implemented as two or more files on disk.

When using a database, you work primarily with the logical components (such as
tables, views, procedures, and users). The physical implementation of files is largely transparent. Typically, only the database administrator needs to work with the physical implementation. Figure 1.2 illustrates the difference between the user view and the physical implementation of a database.

Each instance of SQL Server has four system databases (master, tempdb, msdb, and model) and one or more user databases. Some organizations have only one user database that contains all of their data; some organizations have different databases for each group in their organization. They might also have a database used by a single application. For example, an organization could have one database for sales, one for payroll, one for a document management application, and so on. Some applications use only one database; other applications might access several databases. Figure 1.3 shows the SQL Server system databases and several user databases.

You do not need to run multiple copies of the SQL Server database engine in order for multiple users to access the databases on a server. An instance of the SQL Server Standard Edition or Enterprise Edition is capable of handling thousands of users who are working in multiple databases at the same time. Each instance of SQL Server makes all databases in the instance available to all users who connect to the instance (subject to the defined security permissions).





When you connect to an instance of SQL Server, your connection is associated with a particular database on the server. This database is called the current database. You are usually connected to a database defined as your default database by the system administrator, although you can use connection options in the database APIs to specify another database. You can switch from one database to another by using either the Transact-SQL USE <database_name> statement or an API function that changes your current database context.
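
For example, a minimal sketch that switches to the Northwind sample database and confirms the new database context (DB_NAME() with no argument returns the current database name):

USE Northwind
GO
-- confirm the current database context
SELECT DB_NAME() AS CurrentDatabase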

SQL Server 2000 enables you to detach a database from an instance of SQL Server and then reattach it to another instance, or even attach the database back to the same instance. If you have a SQL Server database file, you can tell SQL Server, when you connect, to attach that database file under a specific database name.
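
As a sketch of that workflow, the sp_detach_db and sp_attach_db system stored procedures perform the detach and attach steps; the database name and file paths below are hypothetical:

-- detach the database from the current instance
EXEC sp_detach_db 'SalesDB'

-- attach the data and log files, reusing the name SalesDB
EXEC sp_attach_db @dbname = 'SalesDB',
@filename1 = 'C:\Data\SalesDB.mdf',
@filename2 = 'C:\Data\SalesDB_log.ldf'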


Que : Explain the Physical Database Architecture of SQL Server ?


Physical Database Architecture
This section describes the way in which SQL Server 2000 files and databases are organized. The organization of SQL Server 2000 and SQL Server 7.0 is different from the organization of data in SQL Server 6.5 or earlier.

Pages and Extents
The fundamental unit of data storage in SQL Server is the page. In SQL Server 2000, the page size is 8 kilobytes (KB). In other words, SQL Server 2000 databases contain 128 pages per megabyte (MB).

The start of each page is a 96-byte header used to store system information, such as the type of page, the amount of free space on the page, and the object ID of the
object owning the page.

Data pages contain all of the data in data rows (except text, ntext, and image data, which are stored in separate pages). Data rows are placed serially on the page (starting immediately after the header). A row offset table starts at the end of the page. The row offset table contains one entry for each row on the page, and each entry records how far the first byte of the row is from the start of the page. The entries in the row offset table are in reverse sequence from the sequence of the rows on the page, as shown in Figure 1.4.

Extents are the basic unit in which space is allocated to tables and indexes. An extent is eight contiguous pages, or 64 KB. In other words, SQL Server 2000 databases
have 16 extents per megabyte.


Que : Explain Transact-SQL Statements ?

Transact-SQL Statements
A Transact-SQL statement is a set of code that performs some action on database objects or on data in a database. SQL Server supports three types of Transact-SQL statements: DDL, DCL, and DML.

Data Definition Language
Data definition language, which is usually part of a database management system, is used to define and manage all attributes and properties of a database, including row layouts, column definitions, key columns, file locations, and storage strategy.

A DDL statement supports the definition or declaration of database objects such as databases, tables, and views. The Transact-SQL DDL used to manage objects is based on SQL-92 DDL statements (with extensions). For each object class, there are usually CREATE, ALTER, and DROP statements (for example, CREATE TABLE, ALTER TABLE, and DROP TABLE).

Most DDL statements take the following form:
CREATE object_name
ALTER object_name
DROP object_name
The following three examples illustrate how to use the Transact-SQL CREATE, ALTER, and DROP statements to create, alter, and drop tables. These statements are not limited to table objects, however.

CREATE TABLE
The CREATE TABLE statement creates a table in an existing database. The following statement will create a table named Importers in the Northwind database. The table will include three columns: CompanyID, CompanyName, and Contact.

USE Northwind
CREATE TABLE Importers
(
CompanyID int NOT NULL,
CompanyName varchar(40) NOT NULL,
Contact varchar(40) NOT NULL
)

ALTER TABLE
The ALTER TABLE statement enables you to modify a table definition by altering, adding, or dropping columns and constraints or by disabling or enabling constraints and triggers. The following statement will alter the Importers table in the Northwind database by adding a column named ContactTitle to the table.

USE Northwind
ALTER TABLE Importers
ADD ContactTitle varchar(20) NULL

DROP TABLE
The DROP TABLE statement removes a table definition and all data, indexes, triggers, constraints, and permission specifications for that table. Any view or stored procedure that references the dropped table must be explicitly dropped by using the DROP VIEW or DROP PROCEDURE statement. The following statement drops the Importers table from the Northwind database.

USE Northwind
DROP TABLE Importers

Data Control Language
Data control language is used to control permissions on database objects. Permissions are controlled by using the SQL-92 GRANT and REVOKE statements and the Transact-SQL DENY statement.

GRANT
The GRANT statement creates an entry in the security system that enables a user in the current database to work with data in that database or to execute specific Transact-SQL statements. The following statement grants the Public role SELECT permission on the Customers table in the Northwind database:

USE Northwind
GRANT SELECT
ON Customers
TO PUBLIC



REVOKE
The REVOKE statement removes a previously granted or denied permission from a user in the current database. The following statement revokes the SELECT permission
from the Public role for the Customers table in the Northwind database:
USE Northwind
REVOKE SELECT
ON Customers
TO PUBLIC

DENY
The DENY statement creates an entry in the security system that denies a permission to a security account in the current database and prevents the security account from inheriting the permission through its group or role memberships. The following statement denies the Public role SELECT permission on the Customers table in the Northwind database:

USE Northwind
DENY SELECT
ON Customers
TO PUBLIC

Data Manipulation Language
Data manipulation language is used to select, insert, update, and delete data in the objects defined with DDL.

SELECT
The SELECT statement retrieves rows from the database and enables the selection of one or many rows or columns from one or many tables. The following statement retrieves the CustomerID, CompanyName, and ContactName data for customers whose CustomerID value is alfki or anatr. The result set is ordered according to the ContactName value:

USE Northwind
SELECT CustomerID, CompanyName, ContactName
FROM Customers
WHERE (CustomerID = 'alfki' OR CustomerID = 'anatr')
ORDER BY ContactName

INSERT
An INSERT statement adds a new row to a table or a view. The following statement adds a row to the Territories table in the Northwind database. The TerritoryID value for the new row is 98101; the TerritoryDescription value is Seattle; and the  RegionID value is 2.

USE Northwind
INSERT INTO Territories
VALUES (98101, 'Seattle', 2)
Note The INTO keyword is an optional keyword that can be used between INSERT and the target table. Use the INTO keyword for code clarity.



UPDATE
The UPDATE statement changes data in a table. The following statement updates the row in the Territories table (in the Northwind database) whose TerritoryID value is 98101. The TerritoryDescription value will be changed to Downtown Seattle.

USE Northwind
UPDATE Territories
SET TerritoryDescription = 'Downtown Seattle'
WHERE TerritoryID = 98101

DELETE
The DELETE statement removes rows from a table. The following statement removes the row from the Territories table (from the Northwind database) whose TerritoryID value is 98101.

USE Northwind
DELETE FROM Territories
WHERE TerritoryID = 98101


Que : Explain the Processing of a SELECT Statement ?

Processing a SELECT Statement
The steps used to process a single SELECT statement referencing only local base tables (no views or remote tables) illustrate the basic process of executing most Transact-SQL statements. SQL Server uses the following steps to process a single SELECT statement:

1. The parser scans the SELECT statement and breaks it into logical units, such as keywords, expressions, operators, and identifiers.

2. A query tree, sometimes called a sequence tree, is built by describing the logical steps needed to transform the source data into the format needed by the result set.

3. The query optimizer analyzes all of the ways in which the source tables can be accessed and selects the series of steps that will return the result fastest while consuming the fewest resources. The query tree is updated to record this exact series of steps, and the final, optimized version of the query tree is called the execution plan.

4. The relational engine begins executing the execution plan. As steps that need data from the base tables are processed, the relational engine uses OLE DB to request the storage engine to pass up data from the row sets that are requested from the relational engine.

5. The relational engine processes the data returned from the storage engine into the format defined for the result set and returns the result set to the client.
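
You can observe the plan that the query optimizer produces in step 3. As a minimal sketch, the SET SHOWPLAN_TEXT option compiles a statement and returns its execution plan instead of running it:

USE Northwind
GO
SET SHOWPLAN_TEXT ON
GO
-- the plan is returned; the SELECT itself is not executed
SELECT CustomerID, CompanyName
FROM Customers
WHERE CustomerID = 'alfki'
GO
SET SHOWPLAN_TEXT OFF
GO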





Que : Explain Entity Relationships with a Suitable Example ?

Entity Relationships
In a relational database, relationships help to prevent redundant data. A relationship works by matching data in key columns—usually columns that have the same name in both tables. In most cases, the relationship matches the primary key from one table, which provides a unique identifier for each row, with an entry in the foreign key column of the other table. Primary keys and foreign keys are discussed in more detail in Chapter 5, “Implementing Data Integrity.”

There are three types of relationships between tables: one-to-one, one-to-many, and many-to-many. The type of relationship depends on how the related columns are defined.

One-to-One Relationships
In a one-to-one relationship, a row in table A can have no more than one matching row in table B (and vice versa). A one-to-one relationship is created if both of the related columns are primary keys or have unique constraints. This type of relationship is not common, however, because information related in this way would usually be in one table.

One-to-Many Relationships
A one-to-many relationship is the most common type of relationship. In this type of relationship, a row in table A can have many matching rows in table B, but a row in table B can have only one matching row in table A. For example, the Publishers and Titles tables mentioned previously have a one-to-many relationship. Each publisher produces many titles, but each title comes from only one publisher. A one-to-many relationship is created if only one of the related columns is a primary key or has a unique constraint.

Many-to-Many Relationships
In a many-to-many relationship, a row in table A can have many matching rows in table B (and vice versa). You create such a relationship by defining a third table, called a junction table, whose primary key consists of the foreign keys from both table A and table B. In Figures 3-6 and 3-7, you saw how the author information could be separated into another table. The Books table and the Authors table have a many-to-many relationship. Each of these tables has a one-to-many relationship with the BookAuthor table, which serves as the junction table between the two primary tables.
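
A minimal sketch of such a junction table follows; the key column names and types are assumptions for illustration, and the Books and Authors tables are assumed to already have primary keys on the referenced columns:

CREATE TABLE BookAuthor
(
-- each foreign key points at one of the two related tables
BookID INT NOT NULL REFERENCES Books(BookID),
AuthorID SMALLINT NOT NULL REFERENCES Authors(AuthorID),
-- the composite primary key is built from the two foreign keys
CONSTRAINT bookauthor_pk PRIMARY KEY (BookID, AuthorID)
)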


Que : What is a Data Type ? Explain System-Supplied, User-Defined, and Column Data Types ?

A data type is an attribute that specifies what type of data can be stored in a column, parameter, or variable. SQL Server provides a set of system-supplied data types. In addition, you can create user-defined data types that are based on the system-supplied data types. This lesson describes system-supplied data types and user-defined data types, and it explains how to identify which data type you should use when defining a column.

System-Supplied Data Types
In SQL Server, each column has a related data type, which is an attribute that specifies
the type of data (integer, character, monetary, and so on) that the object can hold. Certain objects other than columns also have an associated data type. The following
objects have data types:

■ Columns in tables and views
■ Parameters in stored procedures
■ Variables
■ Transact-SQL functions that return one or more data values of a specific data type
■ Stored procedures that have a return code (which always has an integer data type)
Assigning a data type to each column is one of the first steps to take toward designing a table. SQL Server supplies a set of system data types that define all of the types of data that you can use with SQL Server. You can use data types to enforce data integrity, because the data that is entered or changed must conform to the type specified in the original CREATE TABLE statement. For example, you cannot store someone’s last name in a column defined with the datetime data type because a datetime column accepts only valid dates.
Assigning a data type to an object defines four attributes of the object:

The kind of data contained by the object. For example, the data might be character, integer, or binary.

The length of the stored value or its size. The lengths of image, binary, and varbinary data types are defined in bytes. The length of any of the numeric data types is the number of bytes required to hold the number of digits allowed for that data type. The lengths of character string and Unicode data types are defined in characters.

The precision of the number (numeric data types only). The precision is the number of digits that the number can contain. For example, a smallint object can hold a maximum of five digits; therefore, it has a precision of five.

The scale of the number (numeric data types only). The scale is the number of digits that can be stored to the right of the decimal point. For example, an int object cannot accept a decimal point and has a scale of zero. A money object can have a maximum of four digits to the right of the decimal point and has a scale of four.

The following table provides descriptions of the categories of data types that SQL Server supports and descriptions of the base data types that each category contains:


All data stored in SQL Server must be compatible with one of these base data types. The cursor data type is the only base data type that cannot be assigned to a table column. You can use this type only for variables and stored procedure parameters.

Several base data types have synonyms (for example, rowversion is a synonym for timestamp, and national character varying is a synonym for nvarchar).

User-Defined Data Types
User-defined data types are based on the system data types in SQL Server 2000.  User-defined data types can be used when several tables must store the same type of data in a column and you must ensure that these columns have exactly the same data type, length, and nullability. For example, a user-defined data type called postal_code could be created based on the char data type. When you create a user-defined data type, you must supply the following parameters:
■ Name
■ System data type upon which the new data type is based
■ Nullability (whether the data type allows null values)

When nullability is not explicitly defined, it will be assigned based on the ANSI null default setting for the database or connection.

Note If a user-defined data type is created in the Model database, it exists in all new user-defined databases. If the data type is created in a user-defined database, however, the data type exists only in that user-defined database. You can create a user-defined data type by using the sp_addtype system stored procedure  or by using SQL Server Enterprise Manager.
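
For example, a minimal sketch that creates the postal_code type mentioned above with sp_addtype (the char(10) length and the BookShopDB target database are assumptions for illustration):

USE BookShopDB
-- parameters: type name, base system data type, nullability
EXEC sp_addtype postal_code, 'char(10)', 'NOT NULL'

Columns in any table of that database can then be declared with the postal_code type just as they would be with a system data type.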

Column Data Types
In this exercise, you will identify the data types that you should use in your column definitions when you create the tables for the database that you created in Exercise 1. The tables and columns will be based on the objects and data constraints that you identified when you developed your database design. You will use system-supplied base data types for your database, rather than user-defined data types. Each column must have a data type. To perform this exercise, you will need paper and a pencil to write down the data type for each column.

To review existing tables and columns and their data types
1. Open SQL Server Enterprise Manager.

2. Expand the console tree until you can view the list of objects in the Northwind database.

3. Click the Tables node listed beneath the Northwind node. A list of tables in the Northwind database appears in the right pane.

4. Right-click the Employees table, then click Properties. The Table Properties - Employees dialog box appears.

5. Review the list of columns and their data types. Notice that the size of each column
is listed to the right of the data type.

6. Close the Table Properties - Employees dialog box.

7. Right-click the Orders table, then click Properties. The Table Properties - Orders dialog box appears.

8. Review the list of columns and their data types. Close the Table Properties - Orders dialog box.

9. Open the properties for several other tables, and review the columns and data types.

To identify the data types for the Authors table
1. Make a list of each column in the Authors table.
2. Refer to the data constraints that you identified for the Authors table when you developed your database design. Which data constraints apply to the AuthorID column of the Authors table?

At this point, you are concerned only with identifying the data type for the AuthorID column and determining what type of data that column will contain. In this case, you want SQL Server to generate this ID automatically, which means that when you define this column, you will need to include the IDENTITY property in the definition. The IDENTITY property can be used only with an integer or decimal data type. You will learn more about defining this type of column in the next lesson. You decide to use an integer data type rather than decimal, because decimal precision is unnecessary for an ID. You also decide that the smallint data type is adequate for identifying authors. The smallint data type supports an ID of up to 32,767—many more authors than you anticipate the database ever needing to store.
3. Write down smallint next to the AuthorID column.

4. Review the database design and the data constraints for the FirstName and LastName columns. What type of data will you store in these columns? Because a name can vary in length but will not likely exceed 30 characters, you decide to use the varchar(30) data type for each column.

5. Review the database design and the data constraints for the YearBorn and YearDied columns. You can assume that each column will contain only four characters. Because date and time data types do not include a year-only data type, you decide to use
a character data type. Which data type should you use for the YearBorn and YearDied columns?

6. Review the database design and the data constraints for the Description column.
What type of data will you store in this column? Because the description can vary in length but will not likely exceed 200 characters, you decide to use the varchar(200) data type for the column.

7. Be sure to write down the name of the correct data type next to the name of each column in the Authors table.

To identify the column data types for tables in the BookShopDB database
1. Write down the name of each table in your database design.

2. Review the database design and the data constraints for each column in the tables.

3. Identify the data type for each column. What is the data type for each column in the BookShopDB tables?

4. Be certain to write down the data type next to the name of each column (or at least record this information in some way). You will need this information for later exercises.



Que : Explain Integrity Constraints ?

Introduction to Integrity Constraints
Constraints enable you to define the way SQL Server 2000 automatically enforces the integrity of a database. Constraints define rules regarding the values allowed in columns and are the standard mechanisms for enforcing integrity. Using constraints is preferred to using triggers, rules, or defaults. The query optimizer also uses constraint definitions to build high-performance query execution plans. Constraints can be column constraints or table constraints:

■ A column constraint is specified as part of a column definition and applies only
to that column.

■ A table constraint is declared independently from a column definition and can apply to more than one column in a table. Table constraints must be used when more than one column is included in a constraint. For example, if a table has two or more columns in the primary key, you must use a table constraint to include both columns in the primary key. Consider a table that records events happening in a computer in a factory. Assume that events of several types can happen at the same time, but no two events happening at the same time can be of the same type. This rule can be enforced in the table by including both the type and time columns in a two-column primary key, as shown in the following CREATE TABLE statement:
CREATE TABLE FactoryProcess
(
EventType INT,
EventTime DATETIME,
EventSite CHAR(50),
EventDesc CHAR(1024),
CONSTRAINT event_key PRIMARY KEY (EventType, EventTime)
)
SQL Server supports four main classes of constraints: PRIMARY KEY constraints,
UNIQUE constraints, FOREIGN KEY constraints, and CHECK constraints.

PRIMARY KEY Constraints
A table usually has a column (or combination of columns) whose values uniquely identify each row in the table. This column (or columns) is called the primary key of the table and enforces the entity integrity of the table. You can create a primary key by defining a PRIMARY KEY constraint when you create or alter a table. A table can have only one PRIMARY KEY constraint, and a column that participates in the PRIMARY KEY constraint cannot accept null values. Because PRIMARY KEY constraints ensure unique data, they are often defined for identity columns.

When you specify a PRIMARY KEY constraint for a table, SQL Server 2000 enforces data uniqueness by creating a unique index for the primary key columns. This index also permits fast access to data when the primary key is used in queries. If a PRIMARY KEY constraint is defined for more than one column, values can be duplicated within one column—but each combination of values from all of the columns in the PRIMARY KEY constraint definition must be unique. Figure 5.2 illustrates how the au_id and title_id columns in the TitleAuthor table form a composite PRIMARY KEY constraint, which ensures that the combination of au_id and title_id is unique.

Creating PRIMARY KEY Constraints
You can create a PRIMARY KEY constraint by using one of the following methods:
■ Creating the constraint when the table is created (as part of the table definition)
■ Adding the constraint to an existing table, provided that no other PRIMARY KEY constraint already exists

You can modify or delete a PRIMARY KEY constraint once it has been created. For example, you might want the PRIMARY KEY constraint of the table to reference other columns, or you might want to change the column order, index name, clustered option, or fill factor of the PRIMARY KEY constraint. You cannot change the length of a column defined with a PRIMARY KEY constraint.

Note To modify a PRIMARY KEY constraint by using Transact-SQL, you must first delete the existing PRIMARY KEY constraint and then re-create it with the new definition.

The following CREATE TABLE statement creates the Table1 table and defines the
Col1 column as the primary key:
CREATE TABLE Table1
(
Col1 INT PRIMARY KEY,
Col2 VARCHAR(30)
)
You can also define the same constraint by using a table-level PRIMARY KEY
constraint:
CREATE TABLE Table1
(
Col1 INT,
Col2 VARCHAR(30),
CONSTRAINT table_pk PRIMARY KEY (Col1)
)
You can use the ALTER TABLE statement to add a PRIMARY KEY constraint to
an existing table:

ALTER TABLE Table1
ADD CONSTRAINT table_pk PRIMARY KEY (Col1)
When a PRIMARY KEY constraint is added to an existing column (or columns) in the table, SQL Server 2000 checks the existing data in the columns to ensure that it follows the rules for primary keys:
■ No null values
■ No duplicate values
If a PRIMARY KEY constraint is added to a column that has duplicate or null values, SQL Server returns an error and does not add the constraint. You cannot add a PRIMARY KEY constraint that violates these rules.

SQL Server automatically creates a unique index to enforce the uniqueness requirement of the PRIMARY KEY constraint. If a clustered index does not already exist in the table (and a nonclustered index is not explicitly specified), a unique, clustered index is created to enforce the PRIMARY KEY constraint.

Important A PRIMARY KEY constraint cannot be deleted if it is referenced by a FOREIGN KEY constraint in another table. The FOREIGN KEY constraint must be deleted first. FOREIGN KEY constraints are discussed later in this lesson.

UNIQUE Constraints
You can use UNIQUE constraints to ensure that no duplicate values are entered in specific columns that do not participate in a primary key. Although both a UNIQUE constraint and a PRIMARY KEY constraint enforce uniqueness, you should use a UNIQUE constraint instead of a PRIMARY KEY constraint in the following situations:

■ If a column (or combination of columns) is not the primary key. Multiple UNIQUE constraints can be defined on a table, whereas only one PRIMARY KEY constraint can be defined on a table.

■ If a column allows null values. UNIQUE constraints can be defined for columns that allow null values, whereas PRIMARY KEY constraints can be defined only on columns that do not allow null values.
A UNIQUE constraint can also be referenced by a FOREIGN KEY constraint.

Creating UNIQUE Constraints
You can create a UNIQUE constraint in the same way that you create a PRIMARY KEY constraint:
■ By creating the constraint when the table is created (as part of the table definition)
■ By adding the constraint to an existing table, provided that the column or combination of columns comprising the UNIQUE constraint contains only unique or NULL values

A table can contain multiple UNIQUE constraints. You can use the same Transact-SQL statements to create a UNIQUE constraint that you used to create a PRIMARY KEY constraint. Simply replace the words PRIMARY KEY with the word UNIQUE. As with PRIMARY KEY constraints, a UNIQUE constraint can be modified or deleted once it has been created.
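
For instance, a minimal sketch that adds a UNIQUE constraint to the hypothetical Table1 used in the earlier examples:

ALTER TABLE Table1
ADD CONSTRAINT col2_unique UNIQUE (Col2)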

When a UNIQUE constraint is added to an existing column (or columns) in the table, SQL Server 2000 (by default) checks the existing data in the columns to ensure that all values, except null, are unique. If a UNIQUE constraint is added to a column that has duplicated values, SQL Server returns an error and does not add the constraint.

SQL Server automatically creates a UNIQUE index to enforce the uniqueness requirement of the UNIQUE constraint. Therefore, if an attempt is made to insert a duplicate row, SQL Server returns an error message saying that the UNIQUE constraint
has been violated and does not add the row to the table. Unless a clustered index is explicitly specified, a unique, non-clustered index is created by default to enforce the UNIQUE constraint.

FOREIGN KEY Constraints
A foreign key is a column or combination of columns used to establish and enforce a link between the data in two tables. Create a link between two tables by adding a column (or columns) to one of the tables and defining those columns with a FOREIGN KEY constraint. The columns will hold the primary key values from the second table. A table can contain multiple FOREIGN KEY constraints.

For example, the Titles table in the Pubs database has a link to the Publishers table because there is a logical relationship between books and publishers. The pub_id column in the Titles table matches the primary key column in the Publishers table, as shown in Figure 5.3. The pub_id column in the Titles table is the foreign key to the Publishers table.

You can create a foreign key by defining a FOREIGN KEY constraint when you create or alter a table. In addition to a PRIMARY KEY constraint, a FOREIGN KEY constraint can reference the columns of a UNIQUE constraint in another table.

A FOREIGN KEY constraint can contain null values; however, if any column of a composite FOREIGN KEY constraint contains null values, then verification of the FOREIGN KEY constraint will be skipped.

Note A FOREIGN KEY constraint can reference columns in tables in the same database or within the same table (self-referencing tables).

Although the primary purpose of a FOREIGN KEY constraint is to control the data that can be stored in the foreign key table, it also controls changes to data in the primary key table. For example, if the row for a publisher is deleted from the Publishers table and the publisher’s ID is used for books in the Titles table, the relational integrity between the two tables is broken. The deleted publisher’s books are orphaned in the Titles table without a link to the data in the Publishers table. A FOREIGN KEY constraint prevents this situation. The constraint enforces referential integrity by ensuring that changes cannot be made to data in the primary key table if those changes invalidate the link to data in the foreign key table. If an attempt is made to delete the row in a primary key table or to change a primary key value, the action will fail if the deleted or changed primary key value corresponds to a value in the FOREIGN KEY constraint of another table.



To successfully change or delete a row that is referenced by a FOREIGN KEY constraint, you must first either delete the foreign key data in the foreign key table or change the foreign key data in the foreign key table—thereby linking the foreign key to different primary key data.

Creating FOREIGN KEY Constraints
You can create a FOREIGN KEY constraint by using one of the following methods:
■ Creating the constraint when the table is created (as part of the table definition)
■ Adding the constraint to an existing table, provided that the FOREIGN KEY constraint is linked to an existing PRIMARY KEY constraint or a UNIQUE constraint in another (or the same) table

You can modify or delete a FOREIGN KEY constraint once it has been created. For example, you might want the table’s FOREIGN KEY constraint to reference other columns. You cannot change the length of a column defined with a FOREIGN KEY constraint.

Note To modify a FOREIGN KEY constraint by using Transact-SQL, you must
first delete the existing FOREIGN KEY constraint and then re-create it with the
new definition.

The following CREATE TABLE statement creates the Table1 table and defines the Col2 column with a FOREIGN KEY constraint that references the EmployeeID column, which is the primary key in the Employees table:

CREATE TABLE Table1
(
Col1 INT PRIMARY KEY,
Col2 INT REFERENCES Employees(EmployeeID)
)
You can also define the same constraint by using a table-level FOREIGN KEY
constraint:
CREATE TABLE Table1
(
Col1 INT PRIMARY KEY,
Col2 INT,
CONSTRAINT col2_fk FOREIGN KEY (Col2)
REFERENCES Employees (EmployeeID)
)
You can use the ALTER TABLE statement to add a FOREIGN KEY constraint to an
existing table:

ALTER TABLE Table1
ADD CONSTRAINT col2_fk FOREIGN KEY (Col2)
REFERENCES Employees (EmployeeID)

When a FOREIGN KEY constraint is added to an existing column (or columns) in the table, SQL Server 2000 (by default) checks the existing data in the columns to ensure that all values, except null values, exist in the columns of the referenced PRIMARY KEY or UNIQUE constraint. You can prevent SQL Server from checking the data in the column against the new constraint, however, and force it to add the new constraint regardless of the data in the column. This option is useful when the existing data already meets the new FOREIGN KEY constraint or when a business rule requires the constraint to be enforced only from this point forward. You should be careful when adding a constraint without checking existing data, however, because this action bypasses the controls in SQL Server that enforce the data integrity of the table.
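
This option is expressed with the WITH NOCHECK clause. A minimal sketch, reusing the hypothetical Table1 and Employees examples; existing rows are not validated when the constraint is added:

-- add the constraint without checking the rows already in Table1
ALTER TABLE Table1 WITH NOCHECK
ADD CONSTRAINT col2_fk FOREIGN KEY (Col2)
REFERENCES Employees (EmployeeID)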

Disabling FOREIGN KEY Constraints
You can disable existing FOREIGN KEY constraints when performing the following
actions:

Executing INSERT and UPDATE statements. Disable a FOREIGN KEY constraint during INSERT and UPDATE statements if new data will violate the constraint or if the constraint should apply only to the data already in the database. Disabling the constraint enables data in the table to be modified without being validated by the constraints

Implementing replication processing. Disable a FOREIGN KEY constraint during replication if the constraint is specific to the source database. When a table is replicated, the table definition and data are copied from the source database to a destination database. These two databases are usually (but not necessarily) on separate servers. If the FOREIGN KEY constraints are specific to the source database but are not disabled during replication, they might unnecessarily prevent new data from being entered in the destination database.
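
Disabling and re-enabling are done with the NOCHECK CONSTRAINT and CHECK CONSTRAINT clauses of ALTER TABLE. A minimal sketch, reusing the hypothetical col2_fk constraint from the earlier examples (the same syntax applies to CHECK constraints):

-- disable the constraint; data modifications are no longer validated against it
ALTER TABLE Table1 NOCHECK CONSTRAINT col2_fk

-- re-enable the constraint
ALTER TABLE Table1 CHECK CONSTRAINT col2_fk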

CHECK Constraints
CHECK constraints enforce domain integrity by limiting the values that are accepted by a column. They are similar to FOREIGN KEY constraints in that they control the values that are placed in a column. The difference is in how they determine which values are valid. FOREIGN KEY constraints get the list of valid values from another table, and CHECK constraints determine the valid values from a logical expression that is not based on data in another column.

For example, it is possible to limit the range of values for a salary column by creating a CHECK constraint that allows only data ranging from $15,000 through $100,000. This feature prevents the entry of salaries outside the normal salary range. You can create a CHECK constraint with any logical (Boolean) expression that returns TRUE or FALSE based on the logical operators. To allow only data that ranges from $15,000 through $100,000, the logical expression is as follows:

salary >= 15000 AND salary <= 100000
You can apply multiple CHECK constraints to a single column. The constraints are evaluated in the order in which they are created. In addition, you can apply a single CHECK constraint to multiple columns by creating it at the table level. For example, a multiple-column CHECK constraint can be used to confirm that any row with a country column value of USA also has a two-character value in the state column. This feature enables multiple conditions to be checked in one place.

Creating CHECK Constraints
You can create a CHECK constraint by using one of the following methods:
■ Creating the constraint when the table is created (as part of the table definition)
■ Adding the constraint to an existing table
You can modify or delete CHECK constraints once they have been created. For example, you can modify the expression used by the CHECK constraint on a column
in the table.

Note To modify a CHECK constraint by using Transact-SQL, you must first delete the existing CHECK constraint and then re-create it with the new definition.



The following CREATE TABLE statement creates the Table1 table and defines the Col2 column with a CHECK constraint that limits the column-entered values to a range between 0 and 1000:

CREATE TABLE Table1
(
Col1 INT PRIMARY KEY,
Col2 INT
CONSTRAINT limit_amount CHECK (Col2 BETWEEN 0 AND 1000),
Col3 VARCHAR(30)
)
You can also define the same constraint by using a table-level CHECK constraint:
CREATE TABLE Table1
(
Col1 INT PRIMARY KEY,
Col2 INT,
Col3 VARCHAR(30),
CONSTRAINT limit_amount CHECK (Col2 BETWEEN 0 AND 1000)
)
You can use the ALTER TABLE statement to add a CHECK constraint to an existing table:

ALTER TABLE Table1
ADD CONSTRAINT limit_amount CHECK (Col2 BETWEEN 0 AND 1000)

When a CHECK constraint is added to an existing table, the CHECK constraint can apply either to new data only or to existing data as well. By default, the CHECK constraint applies to existing data as well as to any new data. The option of applying the constraint to new data only is useful when the existing data already meets the new CHECK constraint or when a business rule requires the constraint to be enforced only from this point forward.

For example, an old constraint might require postal codes to be limited to five digits, but a new constraint might require nine-digit postal codes. Old data with five-digit postal codes is still valid and will coexist with new data that contains nine-digit postal codes. Therefore, only new data should be checked against the new constraint.

You should be careful when adding a constraint without checking existing data, however, because this action bypasses the controls in SQL Server 2000 that enforce the integrity rules for the table.
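
As with FOREIGN KEY constraints, the WITH NOCHECK clause adds a CHECK constraint without validating the rows already in the table. A minimal sketch against the hypothetical Table1 (the constraint name and range are illustrative):

-- existing rows are not checked; only new and changed rows must satisfy the constraint
ALTER TABLE Table1 WITH NOCHECK
ADD CONSTRAINT new_limit CHECK (Col2 BETWEEN 0 AND 2000)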

Disabling CHECK Constraints
You can disable existing CHECK constraints when performing the following actions:

Executing INSERT and UPDATE statements. Disable a CHECK constraint during INSERT and UPDATE statements if new data will violate the constraint or if the constraint should apply only to the data already in the database. Disabling the constraint allows data in the table to be modified without being validated by the constraints.

Implementing replication processing. Disable a CHECK constraint during replication if the constraint is specific to the source database. When a table is replicated, the table definition and data are copied from the source database to a destination database. These two databases are usually (but not necessarily) on separate servers. If the CHECK constraints specific to the source database are not disabled, they might unnecessarily prevent new data from being entered into the destination database.

Que : How to Create a Trigger Using Transact-SQL Statements ?

Creating Triggers Using Transact-SQL
You can use the CREATE TRIGGER statement to create a trigger by using Query Analyzer or a command-prompt tool such as osql. When using CREATE TRIGGER, you must specify the trigger name, the table or view upon which the trigger is  applied, the class of trigger (INSTEAD OF or AFTER), the event or events that fire the trigger, and the task that you wish the trigger to perform. Optionally, you can specify whether the trigger should be replicated or encrypted. The WITH APPEND clause remains for backward compatibility but shouldn’t be used to create triggers for a SQL Server 2000 database.

The main clauses in a CREATE TRIGGER statement can be summarized as follows:
CREATE TRIGGER trigger_name
ON table_name or view_name
FOR trigger_class and trigger_type(s)
AS Transact-SQL statements
This section discusses the CREATE TRIGGER, ON, and FOR/AFTER/INSTEAD OF clauses in detail and provides examples of how they are used in a trigger statement.

Lesson 3 discusses the Transact-SQL statements appearing after the AS clause. For more details about trigger clauses not shown here, refer to SQL Server Books Online.

The CREATE TRIGGER Clause
Trigger creation begins with the CREATE TRIGGER clause followed by a trigger name. Triggers do not allow specifying the database name as a prefix to the object name. Therefore, select the database with the USE database_name clause and the GO keyword before creating a trigger. GO is specified because CREATE TRIGGER must be the first statement in a query batch.

Permission to create triggers defaults to the table owner. For consistency, consider creating tables, triggers, and other database objects so that dbo is the owner. For example, to create a trigger named Alerter in the BookShopDB database, you can use the following Transact-SQL code:

USE BookShopDB
GO
CREATE TRIGGER dbo.alerter
Trigger names must follow the rules for identifiers. For example, if you decide to create a trigger named Alerter for the Employees Table, you must enclose the name in brackets as shown:

CREATE TRIGGER dbo.[alerter for employees table]

Administering the trigger object, such as deleting it, also requires that you follow the rules for identifiers.

The ON Clause
Triggers must be assigned to a table or view. Use the ON clause to specify the table or view to which the trigger applies. When a trigger is applied, the table or view is referred to as the trigger table or the trigger view. For consistency, specify the table or view owner after the ON clause. For example, to apply a trigger named Alerter to the Employees table (where both objects—the table and the trigger—are owned by dbo), you can use the following Transact-SQL code:

CREATE TRIGGER dbo.alerter
ON dbo.employees
A trigger is applied only to a single table or view. If you need to apply the same trigger task to another table in the database, create a trigger of a different name that contains the same business logic. Then, apply the new trigger to the other table. The default trigger class, AFTER, can be applied only to a table. The new trigger class, INSTEAD OF, can be applied to either a table or a view.

The FOR, AFTER, and INSTEAD OF Clauses
A trigger event type must be specified when the trigger is created. Valid event types include INSERT, UPDATE, and DELETE. A single trigger can be fired because of one, two, or all three of the events occurring. If you want a trigger to fire on all events, follow the FOR, AFTER, or INSTEAD OF clause with INSERT, UPDATE, and DELETE. The event types can be listed in any order. For example, to make the trigger named Alerter fire on all events, you can use the following
Transact-SQL code:

CREATE TRIGGER dbo.alerter
ON dbo.employees
FOR INSERT, UPDATE, DELETE

The FOR clause is synonymous with the AFTER clause. Therefore, the previous code example creates an AFTER trigger. To create Alerter as an INSTEAD OF trigger, you can use the following Transact-SQL code:

CREATE TRIGGER dbo.alerter
ON dbo.employees
INSTEAD OF INSERT, UPDATE, DELETE

Notice that the FOR clause is replaced with INSTEAD OF.

The AS Clause
The AS clause and the Transact-SQL statements following it designate the task that the trigger will perform. The following example shows how to create an Alerter trigger that sends an e-mail message to a user named BarryT when an INSERT, UPDATE, or DELETE occurs on the Employees table:

USE BookShopDB
GO
CREATE TRIGGER dbo.alerter
ON dbo.employees
AFTER INSERT, UPDATE, DELETE
AS
EXEC master..xp_sendmail 'BarryT',
'A record was just inserted, updated or deleted in the Employees table.'
GO
This example is kept simple so that you can see clearly how a task is created in a trigger. There are a number of ways to make the task more useful. For example, you could write the task so that the e-mail message details the exact change that occurred. Lesson 3 explores more complex trigger tasks.

Que : Explain Types of Index ?

Index Types
There are two types of indexes: clustered and nonclustered. Both types of indexes are structured as B-trees. A clustered index contains table records in the leaf level of the B-tree. A nonclustered index contains a bookmark to the table records in the leaf level. If a clustered index exists on a table, a nonclustered index uses it to facilitate data lookup. In most cases, you will create a clustered index on a table before you create nonclustered indexes.

Clustered Indexes
There can be only one clustered index on a table or view, because the clustered index key physically sorts the table or view. This type of index is particularly efficient for queries, because data records—also known as data pages—are stored in the leaf level of the B-tree. The sort order and storage location of a clustered index is analogous to a dictionary in that the words in a dictionary are sorted alphabetically and definitions appear next to the words.

When you create a primary key constraint in a table that does not contain a clustered index, SQL Server will use the primary key column for the clustered index key. If a clustered index already exists in a table, a nonclustered index is created on the column defined with a primary key constraint. A column defined as the primary key makes a useful index because the column values are guaranteed to be unique. Unique values create smaller B-trees than redundant values and thus make more efficient lookup structures.

Note A column defined with a unique constraint creates a nonclustered index automatically.

To force the type of index to be created for a column or columns, you can specify the CLUSTERED or NONCLUSTERED clause in the CREATE TABLE, ALTER TABLE, or CREATE INDEX statements. Suppose that you create a Persons table containing the following columns: PersonID, FirstName, LastName, and SocialSecurityNumber. The PersonID column is defined as a primary key constraint, and the SocialSecurityNumber column is defined as a unique constraint. To make the SocialSecurityNumber column a clustered index and the PersonID column a nonclustered index, create the table by using the following syntax:

CREATE TABLE dbo.Persons
(
personid smallint PRIMARY KEY NONCLUSTERED,
firstname varchar(30),
lastname varchar(40),
socialsecuritynumber char(11) UNIQUE CLUSTERED
)
Indexes are not limited to constraints. You can create indexes on any column or combination of columns in a table or view. Clustered indexes enforce uniqueness internally. Therefore, if you create a nonunique, clustered index on a column that contains redundant values, SQL Server creates a unique value for the redundant rows to serve as a secondary sort key. To avoid the additional work required to maintain unique values on redundant rows, favor clustered indexes for columns defined with primary key constraints.

Nonclustered Indexes
On a table or view, you can create up to 249 nonclustered indexes and one clustered index. You must first create a unique clustered index on a view before you can create nonclustered indexes. This restriction does not apply to tables, however.

A nonclustered index is analogous to an index in the back of a book. You can use a book’s index to locate pages that match an index entry. The database uses a nonclustered index to locate matching records in the database.

If a clustered index does not exist on a table, the table is unsorted and is called a heap. A nonclustered index created on a heap contains pointers to table rows. Each entry in an index page contains a row ID (RID). The RID is a pointer to a table row in a heap, and it consists of a page number, a file number, and a slot number. If a clustered index exists on a table, the index pages of a nonclustered index contain clustered index keys rather than RIDs. An index pointer, whether it is a RID or an index key, is called a bookmark.

Que : Explain Index Characteristics ?

Index Characteristics
A number of characteristics (aside from the index type, which is clustered or nonclustered) can be applied to an index. An index can be defined as follows:

■ Unique—duplicate records are not allowed
■ A composite of columns—an index key made up of multiple columns
■ With a fill factor to allow index pages to grow when necessary
■ With a pad index to change the space allocated to intermediate levels of the B-tree
■ With a sort order to specify ascending or descending index keys

Note Additional characteristics, such as file groups for index storage, can be applied to an index. Refer to CREATE INDEX in SQL Server Books Online and to Lesson 2 for more information.

Indexes are applied to one or more columns in a table or view. With some limitations, you can specify indexes on computed columns.
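
As a sketch that combines several of these characteristics on the Persons table from the earlier example (the assumption that last name/first name pairs are unique is for illustration only):

-- a unique, composite, nonclustered index; FILLFACTOR = 70 leaves 30 percent
-- free space on leaf pages, and PAD_INDEX applies it to intermediate pages too
CREATE UNIQUE NONCLUSTERED INDEX ix_persons_name
ON dbo.Persons (lastname, firstname)
WITH PAD_INDEX, FILLFACTOR = 70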

Que : Write a Note on Transaction Log Architecture ?

Transaction Log Architecture
Every SQL Server database has a transaction log that records all transactions and the database modifications made by each transaction. This record of transactions and their modifications supports three operations:

Recovery of individual transactions. If an application issues a ROLLBACK statement or if SQL Server detects an error (such as the loss of communication with a client), the log records are used to roll back any modifications made during an incomplete transaction.

Recovery of all incomplete transactions when SQL Server is started. If a server running SQL Server fails, the databases might be left in a state where some modifications were never written from the buffer cache to the data files, and there might be some modifications from incomplete transactions in the data files. When a copy of SQL Server is started, it runs a recovery of each database. Every modification recorded in the log that was not written to the data files is rolled forward. Every incomplete transaction found in the transaction log is then rolled back to ensure that the integrity of the database is preserved.

Rolling a restored database forward to the point of failure. After the loss of a database, as is possible if a hard drive fails on a server that does not have a Redundant Array of Independent Disks (RAID), you can restore the database to the point of failure. You first restore the last full or differential database backup and then restore the sequence of transaction log backups to the point of failure. As you restore each log backup, SQL Server reapplies all of the modifications recorded in the log to roll forward all of the transactions. When the last log backup is restored, SQL Server then uses the log information to roll back all transactions that were not complete at that point.

The transaction log is not implemented as a table but as a separate file or set of files in the database. The log cache is managed separately from the buffer cache for data pages, resulting in simple, fast, and robust code within the database engine. The format
of log records and pages is not constrained to follow the format of data pages. You can implement the transaction log on several files. You can also define the files to autogrow as required, which reduces the potential of running out of space in the transaction log and reduces administrative overhead. The mechanism for truncating unused parts of the log is quick and has a minimal effect on transaction throughput.

Que : What is a Deadlock ? How can you Minimize Deadlocks in SQL Server ?

Deadlocks

A deadlock occurs when two or more transactions permanently block each other, with each transaction holding a lock on a resource that the other transactions are trying to lock. Because none of the blocked transactions can proceed, SQL Server detects the deadlock automatically, chooses one of the transactions as the deadlock victim, and rolls it back so that the other transactions can complete.

Minimizing Deadlocks
Although deadlocks cannot be avoided completely, the number of deadlocks can be minimized. Minimizing deadlocks can increase transaction throughput and reduce system overhead because fewer transactions are rolled back, undoing all of the work performed by the transaction. In addition, fewer transactions are resubmitted by applications because they were rolled back when they were deadlocked.


You should adhere to the following guidelines to help minimize deadlocks:
■ Access objects in the same order (see the sketch after this list).
■ Avoid user interaction during transactions.
■ Keep transactions short and in one batch.
■ Use a low isolation level.
■ Use bound connections.
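
Because the guidelines above reduce deadlocks but do not eliminate them, an application can also catch deadlock error 1205 and resubmit the transaction. The sketch below uses TRY...CATCH, which requires SQL Server 2005 or later; in SQL Server 2000 the equivalent retry loop lives in the client application. The table names are hypothetical.

DECLARE @retries int
SET @retries = 3
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION
        -- Access objects in the same order in every transaction (guideline 1).
        UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1
        UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2
        COMMIT TRANSACTION
        BREAK                               -- success: leave the loop
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION
        IF ERROR_NUMBER() = 1205 AND @retries > 1
            SET @retries = @retries - 1     -- chosen as deadlock victim: try again
        ELSE
            SET @retries = 0                -- another error, or out of retries: give up
    END CATCH
END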


Que : Explain the steps for index tuning using the Index Tuning wizard ?

Running the Index Tuning Wizard
You can start the Index Tuning wizard from Enterprise Manager, Query Analyzer, or SQL Profiler. In Enterprise Manager, the Index Tuning wizard is listed in the Select Wizard window. In Query Analyzer, the Index Tuning wizard is an option on the Query menu, and in SQL Profiler it is an option on the Tools menu. When you start the Index Tuning wizard, an introductory screen appears, as shown in Figure 14.3.


Figure 14.3. The introductory screen of the Index Tuning wizard.

After connecting to the server, the Index Tuning wizard requires that you select a database and specify whether you want to keep the existing indexes, whether you want to create indexed views, and how thorough an analysis should be performed. The wizard does not recommend dropping any indexes if the Keep All Existing Indexes checkbox is selected; recommendations will include only new indexes. If you are running SQL Server 2000 Enterprise Edition or Developer Edition, the Index Tuning wizard can create indexes on views if the Add Indexed Views checkbox is selected. The more thorough the analysis, the greater the CPU consumption while the analysis is being performed. If CPU consumption is a problem, take any of the following measures:

■ Lower the level of analysis by selecting the Fast or Medium tuning modes. However, a thorough analysis can result in a greater overall improvement in performance.
■ Analyze a smaller workload and fewer tables.
■ Run the analysis against a test version of the production server. Save the results to a script and then run the script against the production server.
■ Run the wizard on a client computer, not the SQL Server.


After you select the Index Tuning wizard configuration, you must select a workload. Workload data comes from a trace file, a trace table, or a selection in Query Analyzer.

The Query Analyzer selection option is available only if you start the Index Tuning wizard from the Query Analyzer. Do not include index or query hints in the workload. If you do, the Query Optimizer formulates an execution plan based on the index hints, which might prevent the selection of an ideal execution plan.

After you select the workload, you can change the default index tuning parameters, select the tables for the wizard to analyze, and then run the analysis. Following the analysis, the wizard might not make index suggestions if there isn’t enough data in the tables being sampled or if recommended indexes do not offer enough projected improvement in query performance over existing indexes.

To configure and run the Index Tuning wizard
1. Open Query Analyzer, and connect to your local server.

2. In the Editor pane of the Query window, type any character or press the space bar.
You must take this action for the Index Tuning wizard to be an available option in the Query pull-down menu.

3. Click the Query menu and then click Index Tuning wizard. The Welcome to the Index Tuning wizard screen appears.

4. Click Next. The Select Server And Database screen appears.

5. Select BookShopDB from the Database drop-down list box and click Next. The Specify Workload screen appears. Notice that the Query Analyzer radio button is selected. This option is available only when you start the Index Tuning wizard from Query Analyzer.

6. Click the My Workload File radio button. An Open window appears and Trace01.trc is listed in the folder and file pane.

7. Double-click Trace01.trc. The path and file name of the trace file appears in the Specify Workload screen.

8. Press the Advanced Options button to review the index tuning parameters and then click Cancel.

9. Press Next on the Specify Workload screen. The Select Tables to Tune screen appears.

10. Scroll down in the list and select the [dbo].[table01] checkbox.

11. Click Next. The analysis runs as the Index Tuning wizard determines the type of indexes to recommend. When the analysis completes, the Index Recommendations screen appears and two indexes are recommended. Below the index recommendations a bullet shows the estimated query performance improvement based on the sampled workload.

The Index Tuning wizard recommends a clustered index named Table011 with a key on the UniqueID column and a nonclustered index named Table012 with a key on the Col03 and LongCol02 columns. Later in the Index Tuning wizard screens, you can choose to save a script to create these indexes. You can customize the script before executing it. For example, you might want to name the indexes differently.
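A sketch of the kind of script the wizard can save, using the names it recommended (you might rename the indexes before executing it; the wizard's actual script may differ):

USE BookShopDB
GO
CREATE CLUSTERED INDEX [Table011] ON [dbo].[table01] ([UniqueID])
GO
CREATE NONCLUSTERED INDEX [Table012] ON [dbo].[table01] ([Col03], [LongCol02])
GO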

12. Click the Analysis button and review the various reports available from the Reports drop-down list box, then click Close.

13. In the Index Recommendations screen, click Next. The Schedule Index Update Job screen appears.

14. Click the Apply Changes checkbox and then click Next. The Completing The Index Tuning wizard screen appears.

15. Click Finish. The database is updated with the recommended changes and then a message box appears, indicating that the Index Tuning wizard has successfully completed.

16. Click OK to close the Index Tuning wizard.

17. Leave Query Analyzer open to complete the next practice.


Pages and Extents
The fundamental unit of data storage in SQL Server is the page. The disk space allocated to a data file (.mdf or .ndf) in a database is logically divided into pages numbered contiguously from 0 to n. Disk I/O operations are performed at the page level. That is, SQL Server reads or writes whole data pages.
Extents are a collection of eight physically contiguous pages and are used to efficiently manage the pages. All pages are stored in extents.
Pages
In SQL Server, the page size is 8 KB. This means SQL Server databases have 128 pages per megabyte. Each page begins with a 96-byte header that is used to store system information about the page. This information includes the page number, page type, the amount of free space on the page, and the allocation unit ID of the object that owns the page.
The following list shows the page types used in the data files of a SQL Server database and what each contains:
  • Data: Data rows with all data, except text, ntext, image, nvarchar(max), varchar(max), varbinary(max), and xml data, when text in row is set to ON.
  • Index: Index entries.
  • Text/Image: Large object data types (text, ntext, image, nvarchar(max), varchar(max), varbinary(max), and xml data), and variable-length columns (varchar, nvarchar, varbinary, and sql_variant) when the data row exceeds 8 KB.
  • Global Allocation Map, Shared Global Allocation Map: Information about whether extents are allocated.
  • Page Free Space: Information about page allocation and free space available on pages.
Data rows are put on the page serially, starting immediately after the header. A row offset table starts at the end of the page, and it contains one entry for each row on the page. Each entry records how far the first byte of the row is from the start of the page. The entries in the row offset table are in reverse sequence from the sequence of the rows on the page.
Figure: A SQL Server data page with row offsets.
Large Row Support
Rows cannot span pages in SQL Server 2005; however, portions of a row may be moved off the row's page, so the row can actually be very large. The maximum amount of data and overhead that is contained in a single row on a page is 8,060 bytes (8 KB). However, this does not include the data stored in the Text/Image page type. In SQL Server 2005, this restriction is relaxed for tables that contain varchar, nvarchar, varbinary, or sql_variant columns. When the total row size of all fixed and variable columns in a table exceeds the 8,060-byte limitation, SQL Server dynamically moves one or more variable-length columns to pages in the ROW_OVERFLOW_DATA allocation unit, starting with the column with the largest width. This is done whenever an insert or update operation increases the total size of the row beyond the 8,060-byte limit. When a column is moved to a page in the ROW_OVERFLOW_DATA allocation unit, a 24-byte pointer is maintained on the original page in the IN_ROW_DATA allocation unit. If a subsequent operation reduces the row size, SQL Server dynamically moves the columns back to the original data page. For more information, see Row-Overflow Data Exceeding 8 KB.
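A minimal sketch of a table that can trigger this behavior; all names are hypothetical:

CREATE TABLE dbo.WideRow
(
    RowID int IDENTITY NOT NULL PRIMARY KEY,
    Col1  varchar(5000) NULL,
    Col2  varchar(5000) NULL
)
-- If an INSERT or UPDATE pushes the combined row size past 8,060 bytes,
-- SQL Server 2005 moves the widest variable-length column to a
-- ROW_OVERFLOW_DATA page and leaves a 24-byte pointer in its place.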
Extents
Extents are the basic unit in which space is managed. An extent is eight physically contiguous pages, or 64 KB. This means SQL Server databases have 16 extents per megabyte.
To make its space allocation efficient, SQL Server does not allocate whole extents to tables with small amounts of data. SQL Server has two types of extents:
  • Uniform extents are owned by a single object; all eight pages in the extent can only be used by the owning object.
  • Mixed extents are shared by up to eight objects. Each of the eight pages in the extent can be owned by a different object.
A new table or index is generally allocated pages from mixed extents. When the table or index grows to the point that it has eight pages, it then switches to use uniform extents for subsequent allocations. If you create an index on an existing table that has enough rows to generate eight pages in the index, all allocations to the index are in uniform extents.
Figure: Mixed and uniform extents.

 
Q-1 : Types of Backups

Hot Backup (Online Backup)
Whereas a cold backup takes a backup of a database in a shutdown state, a hot backup enables you to take a backup of a database that has not been shut down. This is the most tedious backup method, but it is also the most flexible. It enables you to take backups of an active database. It ensures resource availability to end users and enables the DBA and the operations staff to recover the database.
Cold backups concentrate on copying all the physical files associated with a database instance. Hot backups, on the other hand, operate at the tablespace level. To do a hot backup, you must place each individual tablespace into backup mode (by using the alter tablespace command), copy the physical database files that make up the tablespace, and take the tablespace out of backup mode (again by using the alter tablespace command). You can issue these commands from Oracle Server*Manager or SQL*Plus.
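For example, a minimal sketch of the sequence, assuming a tablespace named USERS and illustrative file paths:

ALTER TABLESPACE users BEGIN BACKUP;
-- Copy the tablespace's data files with an operating system utility, e.g.:
--   cp /u01/oradata/orcl/users01.dbf /backup/users01.dbf
ALTER TABLESPACE users END BACKUP;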
When you place a tablespace in backup mode, the Oracle instance notes that a backup is being performed and internally compensates for it. As you know, it is impossible to make an authentic copy of a database file that is being written to. On receipt of the command to begin the backup, however, Oracle ceases to make direct changes to the database file. It uses a complex combination of rollback segments, buffers, redo logs, and archive logs to store the data until the end backup command is received and the database files are brought back in sync.
Simplifying a hot backup in this way is tantamount to classifying the USS Nimitz as a boat. The complexity of the actions taken by the Oracle RDBMS under a hot backup could consume an entire chapter and is beyond the scope of this book. What you should understand is the trade-off for taking a hot backup is increased use of rollback segments, redo logs, archive logs, and internal buffer areas within the SGA.
When you run a hot backup, you can restore the data files that compose a single tablespace and apply all the associated redo and archive logs to bring the tablespace back in sync with the database.

You must be running the database in ARCHIVELOG mode to perform a hot backup.

Cold Backup (Offline Backup)
One of the simplest backup methods, but also one of the most difficult to implement, is the cold backup. In a cold backup, the database has been totally shut down and all the physical files associated with the database are copied by means of normal operating system utilities. Because the database is not in operation, changes are not made to the physical files, and there are no conflicting integrity or consistency problems.
The difficulties in implementing this type of backup are mostly political, due largely to the amount of time required. Depending on the size of the database and the speed of the copy utility used—copies to disk are faster than copies to tape—a cold backup can take anywhere from a few minutes to several hours or even several days. Thus, a cold backup is not always an option.
Many sites supplement weekly or monthly cold backups with other backups on a nightly basis. They think that they have 24/7 operations, when in reality large windows of time are available in the evening for a cold backup. This is, of course, site-specific. It is up to the DBA to evaluate the needs of the user community versus the needs of the operations staff.

Q-2 : Types of Shutdown in Oracle

shutdown normal
When you issue a shutdown, also called a shutdown normal, from within Oracle Server*Manager, the Oracle RDBMS is very patient. In this mode, the database instance ignores further attempts to log into the database and waits for all the currently active sessions to disconnect from the database.
Using shutdown normal is not always the best option for a backup, even though it provides for the cleanest type of shutdown. If a user leaves the office with his workstation still logged in, the backup must wait until he logs out. The next morning, no one can log into the database because Oracle is still waiting to perform a shutdown, and the backup has not yet occurred.
shutdown immediate
A shutdown in the immediate mode is almost identical to a shutdown in the normal mode, with one exception: Oracle has no patience. When you issue a shutdown immediate command, Oracle immediately terminates all the database connections and performs rollback operations on all the outstanding transactions. Checkpoints and buffer flushing are done, and the database is brought down.
For backup operations, shutdown immediate works best, for it deals with users who fail to log off their workstations. Because Oracle performs all the rollback and checkpointing, the database is in a consistent, stable state when the termination occurs.
shutdown abort
The shutdown abort command should be used only as a last resort, and then only when all the other shutdown options have failed. By using the shutdown abort command, the DBA immediately terminates all the background processes that make up the Oracle database instance, but no rollback, checkpoint, or buffer flushing operations occur before the shutdown. In rare cases, this can lead to corruption of some of the data within the database.
A shutdown abort should not be used to stop the database before backup operations. If shutdown abort is required, the database should be restarted and shut down again in either immediate or normal mode to ensure a stable, consistent view of the database that is acceptable for backup operations.
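
For reference, a quick sketch of the three commands as issued from Server Manager or SQL*Plus while connected with administrative privileges:

SHUTDOWN NORMAL     -- refuses new logins; waits for every session to disconnect
SHUTDOWN IMMEDIATE  -- terminates sessions, rolls back open transactions, then stops
SHUTDOWN ABORT      -- kills the instance outright; instance recovery runs at next startup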

Q-3 : What do you mean by Tuning ? Explain types of Tuning in detail.
When you are called on to optimize or tune a system, it is of paramount importance that you distinguish between the two levels of performance tuning: applications tuning and database tuning. They are distinct areas of expertise and are often handled by different people. The DBA should have at least an overview of the importance and functions of each type of tuning.
At the base of everything is the operating system, which drives the physical functionality—such as how to access the physical disk devices. On top of this level rests the RDBMS, which interacts with the operating system to store information physically. Applications communicate with the RDBMS to perform business tasks.

Applications Tuning

Applications tuning deals with how the various applications—forms, reports, and so on—are put together to interact with the database. Previous chapters discussed how a database is little more than a series of physical data files. Essentially, an application is nothing more than a program that issues calls to the database, which in turn are interpreted as physical reads and writes from the physical data files. Applications tuning means controlling the frequency and amount of data that the application requests from or sends to the database.

Database Tuning

Whereas applications tuning addresses how a task is accomplished, tuning at the database level is more of a nuts-and-bolts affair. Performance tuning at the applications level relies on a methodical approach to isolating potential areas to improve. Tuning at the database level, however, is more hit-and-miss. It concentrates on things such as enlarging database buffers and caches by increasing INIT.ORA parameters, or balancing database files to achieve optimum throughput.
At the database level, there are three kinds of tuning:
  • Memory tuning
  • I/O tuning
  • Contention tuning
Each kind has a distinct set of areas that the DBA must examine. Memory tuning deals with optimizing the numerous caches, buffers, and shared pools that reside in memory and compose the core memory structures for the Oracle RDBMS. I/O tuning is concerned with maximizing the speed and efficiency with which the RDBMS accesses the physical data files that make up its basic storage units. Contention tuning seeks to resolve problems in which the database fights against itself for database resources.
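As an illustration of the memory-tuning side, an INIT.ORA fragment might adjust parameters such as the following; the values are placeholders, not recommendations:

# Illustrative INIT.ORA fragment; derive real values from measured statistics.
db_block_buffers = 2000        # data block buffer cache (memory tuning)
shared_pool_size = 16000000    # shared SQL area and dictionary cache
log_buffer       = 163840      # redo log buffer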
There are only four basic steps involved in database tuning. They hold true for all three types of tuning:
  1. Gather information.
  2. Determine optimal changes.
  3. Implement changes.
  4. Monitor the database.
As with applications tuning, the more proactively the process is done, the more effective it is. The process is seldom effective when it is done on the fly or without the proper amount of research.
Q-4 : What is Contention ? Explain Contention Issues
DBAs often ignore the physical aspects of a system. With all the logical structures that a DBA must deal with on a day-to-day basis, it is easy to forget about the physical elements that support them, such as SCSI cards, bandwidth, or an I/O bus. Whenever you fail to consider the physical elements, contention can occur within the database.
Like spoiled children, database elements fight over resources. This is the most basic definition of contention. When contention happens, the database must wait for an event to occur. This event, such as writing a block of data to a physical device or locking a row inside a database table, causes an appreciable slowdown in database performance. It is the responsibility of the DBA and others, such as the system administrator, to work with the database to minimize contention. When you minimize contention, the database performs at consistent, efficient speeds.

I/O Contention and Load Balancing

Contention among physical storage devices is the most common type of contention. Each disk drive has heads that travel back and forth across the magnetic medium (the disk) to read and write information. A database is made up of several physical data files, many of which reside on the same physical disk, so it is easy to see how contention can occur. If the database requests access to several data files on the same disk, the result is contention as the drive head moves across the disk to the first location and accesses the file, moves to the second location and accesses the file, and so on. Fortunately, you can minimize I/O contention.
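One common way to see how I/O is spread across the data files is to query the V$ views; a sketch for an Oracle7-era database:

SELECT d.name, f.phyrds, f.phywrts
  FROM v$datafile d, v$filestat f
 WHERE d.file# = f.file#
 ORDER BY f.phyrds + f.phywrts DESC;   -- busiest files first; candidates for relocation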

Rollback Segment Contention

One of the features of an Oracle7 database is the ability to undo, or roll back, uncommitted changes to the database. In short, a transaction that physically changes database data (INSERT, UPDATE, or DELETE SQL statements) produces information that Oracle writes to its online rollback segments. Many DBAs fail to realize that because Oracle attempts to provide data consistency when a query is issued, SELECT statements also use rollback segments when they access data. When a query is issued, if a row has been changed but not committed, the Oracle RDBMS returns information from the rollback segments to provide read consistency. Rollback segments are also used when an instance is forced down or ended with an abnormal termination.
Rollback segment contention can occur whenever a transaction accesses a block within a rollback segment that another transaction needs. Use the following query to determine the amount of contention being experienced within the rollback segments.
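One common form of that check queries V$WAITSTAT for undo-related waits and compares the counts against total data requests; a sketch:

SELECT class, count
  FROM v$waitstat
 WHERE class IN ('system undo header', 'system undo block',
                 'undo header', 'undo block');

SELECT SUM(value) AS total_requests
  FROM v$sysstat
 WHERE name IN ('db block gets', 'consistent gets');

-- Rule of thumb: if any wait count exceeds roughly 1 percent of
-- total_requests, consider adding rollback segments.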

Redo Log Contention

There is a buffer cache area in the SGA for redo information. This information is stored in memory and regulated through the use of two latches, or RAM-level locks. The redo allocation latch controls the allocation of space for writing redo information to the buffer. The redo copy latch is used to copy information to the buffer.
Latch requests are made in one of two modes. A willing-to-wait request waits, sleeps, and then makes the request again, repeating until it acquires the latch. An immediate request, conversely, does not wait; instead, the process continues without acquiring the latch.
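
A sketch of how to inspect both latch types in V$LATCH; high miss ratios suggest redo latch contention:

SELECT name, gets, misses, immediate_gets, immediate_misses
  FROM v$latch
 WHERE name IN ('redo allocation', 'redo copy');
-- High misses relative to gets (willing-to-wait requests), or high
-- immediate_misses relative to immediate_gets, indicates contention.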








