Monday, April 25, 2005

DBCC Showcontig - Explained

The output of the DBCC SHOWCONTIG statement is as follows:
DBCC SHOWCONTIG scanning 't1' table...
Table: 't1' (541244983); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 25
- Extents Scanned..............................: 5
- Extent Switches..............................: 10
- Avg. Pages per Extent........................: 5.0
- Scan Density [Best Count:Actual Count].......: 36.36% [4:11]
- Logical Scan Fragmentation ..................: 48.00%
- Extent Scan Fragmentation ...................: 40.00%
- Avg. Bytes Free per Page.....................: 4862.4
- Avg. Page Density (full).....................: 39.93%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
This output represents the following:
Pages Scanned: This is the number of pages in the index or the table. In the preceding example, four rows can fit on one page, since the average row size, with three INTs and one CHAR(2000), is 2,012 (= 4 + 4 + 4 + 2,000) bytes. As there are 40 rows, ideally the number of pages should be 10 (= 40 / 4). The high value of 25 indicates a large amount of internal fragmentation.

Extents Scanned: This is the number of extents for the index or the table. In the preceding example there are 25 pages. Therefore, the minimum number of extents should be 4 (= 25 / 8, rounded to the higher integer value). A value of 5 indicates that there is some, but not a significant amount of, fragmentation within the extents.

Extent Switches: This is the number of switches required between the extents to access pages in the logical order of the index. In an ideal situation, if all the pages of an index (or table) are laid out in the same order as that of the index,
then Extent Switches = Extents Scanned – 1.
In the preceding example, the ideal number of Extent Switches = (Extents Scanned − 1) = (5 − 1) = 4. A value greater than 4 indicates that the pages in the extents are not in the same order as the logical order of the index. The current high value of 10 is a sign of a large amount of external fragmentation.

Avg. Pages per Extent:
The average number of pages in an extent = Pages Scanned / Extents Scanned.
For a large table with minimal external fragmentation, Avg. Pages per Extent should be close to 8. For large tables, anything less than 8 indicates external fragmentation.

For small tables, however, Avg. Pages per Extent can be less than 8. For example, a small table with only two pages will have Avg. Pages per Extent as 2, which should not be considered as an external fragmentation since the small number in this case is not due to fragmented content, but a small amount of content.

Scan Density [Best Count:Actual Count]: This is the ratio of the Best Count of extents to the Actual Count, and it is one of the most useful indicators of fragmentation.
The Best Count represents the number of extents required for the number of Pages Scanned. It is equal to Pages Scanned / 8, rounded to the higher integer number. For example, if Pages Scanned is 9, then Best Count is 9 / 8, which rounded to the higher integer number is 2.
The Actual Count is the number of extents implied by the current number of Extent Switches. As mentioned in the preceding description of Extent Switches, Actual Count = Extent Switches + 1.
In the best case, the Actual Count should be equal to the Best Count. Therefore, ideally, Scan Density should be 100%. A value less than 100% indicates that the pages are non-contiguously distributed between the extents. An index with a Scan Density of less than 40% can be considered a candidate for defragmentation.
In the preceding example
Best Count = Pages Scanned / 8, rounded to the higher integer value
= 25 / 8, rounded to the higher integer value
= 4
and Actual Count = Extent Switches + 1 = 10 + 1 = 11
Therefore Scan Density = 4:11 = 36.36%
A low value of 36.36% indicates a high amount of external fragmentation.

Logical Scan Fragmentation: This shows the ratio of pages with a different physical order from the logical order of the index. A range between 0% and 10% is considered an acceptable value.
In the preceding example, the high value of 48.00% indicates that 12 out of 25 pages are out of order, which is a sign of high external fragmentation.

Extent Scan Fragmentation: This indicates the gaps between the extents. Ideally, all the extents used by an index should be side by side on the disk. If a gap exists between two adjacent extents, then the next extent can't be retrieved by the read-ahead operation on the disk. The value of Extent Scan Fragmentation represents the ratio of the total number of gaps between extents to the total number of extents. For instance, if two adjacent extents aren't side by side, then it represents one gap. For a large table using uniform extents, a high Extent Scan Fragmentation value is a sign of high external fragmentation. Ideally, the value of Extent Scan Fragmentation should be 0%, indicating that there are no gaps between the extents.
In the preceding example, a value of 40.00% indicates that out of the five extents, two have gaps from their adjacent extents. This is a sign of high external fragmentation.

Avg. Bytes Free per Page: This is the average number of free bytes in a page. A high value may be due to a large amount of internal fragmentation. This value can also be high if you intentionally maintain lots of free space per page, using the fill factor to reduce page splits caused by INSERT and UPDATE queries. You'll learn about the fill factor later in the chapter.
In the preceding example, a high value of 4,862.4 bytes indicates that, on average, more than half of each page is empty. This is a sign of high internal fragmentation.

Avg. Page Density (full): This is the inverse of Avg. Bytes Free per Page, expressed as a percentage. A high percentage value indicates that a greater number of rows have been packed into the pages.
In the preceding example, the low value of 39.93% represents the same information as represented by the high value of Avg. Bytes Free per Page. It is an indication of a high amount of internal fragmentation.

For a large table, DBCC SHOWCONTIG may take a long time to produce the detailed fragmentation report. To get a quick report on the fragmentation status, you can execute the DBCC SHOWCONTIG statement with the FAST option:

DBCC SHOWCONTIG (t1, i1) WITH FAST

This provides a short report:
DBCC SHOWCONTIG scanning 't1' table...
Table: 't1' (541244983); index ID: 1, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 25
- Extent Switches..............................: 10
- Scan Density [Best Count:Actual Count].......: 36.36% [4:11]
- Logical Scan Fragmentation ..................: 48.00%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
You can use this report to decide whether or not a detailed fragmentation analysis is required.

Internal fragmentation:
Number of leaf (or data) pages = Pages Scanned = 10
Average free space in a page = Avg. Bytes Free per Page = 12.0 bytes
Amount of compression in a page = Avg. Page Density (full) = 99.85%

External fragmentation:
A minimum of two extents is required to hold the ten pages. For minimal external fragmentation, there should not be any gap between the two extents, and all pages should be physically arranged in the logical order of the index. You can observe these aspects in the preceding output:

1. Number of extents = Extents Scanned = 2.

2. Amount of gap between the extents = Extent Scan Fragmentation = 0.00%.
In the case of a nonzero value, determine the number of gaps between the extents. For example, 50% would indicate one gap between the two extents. Compare this number of gaps with the original number of gaps; a lower value is always better.

3. Number of switches between the two extents required to follow the logical order of the pages = Extent Switches = 1.

4. Contiguous distribution of pages across extents = Scan Density [Best Count:Actual Count] = 100.00%.

5. Number of out-of-order pages = Logical Scan Fragmentation = 0.00%. As mentioned previously, a value between 0% and 10% is considered acceptable.
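The improved numbers above come from defragmenting or rebuilding the index. A sketch of the two common options, assuming the same table t1 and index i1 from the earlier output:

```sql
-- Rebuild the index completely (holds locks; run at off-peak times):
DBCC DBREINDEX ('t1', 'i1')

-- Or defragment the leaf level in place (online, less intrusive):
DBCC INDEXDEFRAG (0, 't1', 'i1')   -- 0 = current database

-- Re-check the fragmentation afterwards:
DBCC SHOWCONTIG ('t1', 'i1')
```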

Wednesday, April 20, 2005

TSQL Enhancements on SQL 2005

Microsoft positions SQL Server 2005 as "a secure, scalable, enterprise-class data management platform for Windows Server systems".

The features introduced in SQL Server 2005 are aimed at the following categories:
(1) Developer Productivity (.NET CLR, Service Broker, Notification Services)

(2) Business Intelligence
(3) Enterprise Data Management

When to use the .NET CLR and when to use T-SQL?
Use T-SQL when you are performing set-based operations such as joins; use CLR-based code when you want to perform CPU-intensive work (complex calculations, iterations, and so on). Also, if you need to take advantage of .NET Framework class features, CLR functions are the best choice.
T-SQL and CLR functions can be interleaved, for example:
SELECT dbo.myCLRFunction(), dbo.myTSQLFunction() FROM mytable

CLR procedure call:
Let's say you have created a CLR procedure. The first step is to register it in a SQL Server database. When you do this, SQL Server stores the assembly as binary content in the sys.assemblies catalog view and other related tables. Once the assembly has been registered, you then need to wrap it in a normal SQL procedure or function. The CREATE statements have a new EXTERNAL NAME clause that lets you do that. After that, when you invoke the procedure and SQL Server finds that it is CLR-based, the CLR runtime is invoked.
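A sketch of those two steps, with hypothetical assembly, namespace, class, and method names:

```sql
-- Step 1: register the compiled .NET assembly in the database
CREATE ASSEMBLY MyClrLibrary
FROM 'C:\assemblies\MyClrLibrary.dll'
WITH PERMISSION_SET = SAFE
GO

-- Step 2: wrap the CLR method in a normal T-SQL procedure
-- via the new EXTERNAL NAME clause (assembly.class.method)
CREATE PROCEDURE dbo.myCLRProcedure
AS EXTERNAL NAME MyClrLibrary.[MyNamespace.MyClass].MyMethod
GO
```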

TSQL Enhancements on SQL 2005

(1) New Data Types
a) XML is now a first-class data type: variables, table columns, and parameters can be created with it.
XQuery: a new query language that can be used to manipulate XML content.
b) varchar(max), nvarchar(max), and varbinary(max) support up to 2 GB of data, with the same character-manipulation functions working for both small and large data.
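A short sketch of both new data types (the table and column names are illustrative):

```sql
-- XML as a first-class type: variables, columns, and parameters
DECLARE @doc xml
SET @doc = '<order id="1"><item>widget</item></order>'

CREATE TABLE dbo.Orders
(
    OrderId  int IDENTITY PRIMARY KEY,
    OrderDoc xml,            -- native XML column
    Notes    varchar(max)    -- up to 2 GB of character data
)

-- The usual string functions work on varchar(max) too:
SELECT SUBSTRING(Notes, 1, 100) FROM dbo.Orders
```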

(2) MARS (Multiple Active Result Sets) support. MARS allows you to fetch multiple result sets over a single connection. Earlier, you would get an error telling you that the connection was busy with another result set.
(3) Native XML support
(4) New Isolation Levels
(5) Exception Management. Exception handling is now structured, like in any other .NET language, and far more robust than the conventional approach of checking @@ERROR after every statement. Welcome to the world of TRY...CATCH blocks: any error raised in the TRY block is caught in the CATCH block.
There are some system functions that help you take the necessary action:
ERROR_NUMBER() – returns the error number encountered
ERROR_MESSAGE() – returns the error message text
ERROR_SEVERITY() – returns the severity of the error
ERROR_STATE() – returns the error state number
TRY...CATCH blocks can be nested to any level, and an error raised is caught by the nearest CATCH block.
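A minimal sketch of the new structured error handling (the divide-by-zero is just an easy way to force an error):

```sql
BEGIN TRY
    SELECT 1 / 0   -- raises a divide-by-zero error
END TRY
BEGIN CATCH
    SELECT ERROR_NUMBER()   AS ErrorNumber,
           ERROR_MESSAGE()  AS ErrorMessage,
           ERROR_SEVERITY() AS ErrorSeverity,
           ERROR_STATE()    AS ErrorState
END CATCH
```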
(6) Recursive Queries
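Recursive queries are written with common table expressions (CTEs); a quick sketch, using a hypothetical Employees table with a self-referencing ManagerId column:

```sql
WITH OrgChart (EmployeeId, ManagerId, Level) AS
(
    -- Anchor member: the top of the hierarchy
    SELECT EmployeeId, ManagerId, 0
    FROM dbo.Employees
    WHERE ManagerId IS NULL

    UNION ALL

    -- Recursive member: direct reports of the previous level
    SELECT e.EmployeeId, e.ManagerId, oc.Level + 1
    FROM dbo.Employees e
    JOIN OrgChart oc ON e.ManagerId = oc.EmployeeId
)
SELECT * FROM OrgChart
```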
(7) New T-SQL functions for ranking and transforming data.
RANK(): provides a rank for each row within a given partition, for example:
SELECT RANK() OVER (PARTITION BY Gender ORDER BY TotalPercent DESC) AS [Rank] FROM dbo.StudentMarks
- One caveat to note with the RANK function: if two rows tie for the same rank, the next rank is skipped.
NTILE(): distributes the rows in an ordered partition into a specified number of groups.
ROW_NUMBER(): returns a running number for each row.
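The three functions side by side, against the same StudentMarks table used above (the StudentName column is an assumption):

```sql
SELECT StudentName, Gender, TotalPercent,
       RANK()       OVER (PARTITION BY Gender ORDER BY TotalPercent DESC) AS [Rank],
       NTILE(4)     OVER (PARTITION BY Gender ORDER BY TotalPercent DESC) AS [Quartile],
       ROW_NUMBER() OVER (PARTITION BY Gender ORDER BY TotalPercent DESC) AS [RowNo]
FROM dbo.StudentMarks
```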
(8) DDL Triggers. DDL triggers fire in response to a Transact-SQL DDL event processed in the current database or on the current server.
The scope of the trigger depends on the event. For example, a DDL trigger created to fire in response to a CREATE TABLE event does so whenever a CREATE TABLE statement runs in the database, while one created for a CREATE LOGIN event fires whenever a CREATE LOGIN statement runs on the server.
DDL triggers can be created to fire in response to one or more particular DDL statements, or to a predefined group of similar DDL statements. For example, if you want a DDL trigger to fire after any CREATE TABLE, ALTER TABLE, or DROP TABLE statement executes, you can specify FOR DDL_TABLE_EVENTS in the CREATE TRIGGER statement.
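A sketch of a database-scoped DDL trigger on the table-event group (the trigger name is illustrative):

```sql
CREATE TRIGGER trg_AuditTableChanges
ON DATABASE
FOR DDL_TABLE_EVENTS   -- CREATE TABLE, ALTER TABLE, DROP TABLE
AS
    PRINT 'A table was created, altered, or dropped.'
    -- EVENTDATA() returns an xml document describing the event
    SELECT EVENTDATA()
```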

APPLY operator: invokes a table-valued function for each row of an outer table expression; the table-valued function acts as the right input and the outer table expression acts as the left input.
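A sketch of CROSS APPLY, assuming a hypothetical table-valued function dbo.GetTopOrders(@CustomerId, @n) and a Customers table:

```sql
SELECT c.CustomerId, o.OrderId, o.OrderTotal
FROM dbo.Customers c
CROSS APPLY dbo.GetTopOrders(c.CustomerId, 3) o
-- OUTER APPLY would also keep customers with no matching rows
```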
Indexes: several enhancements (cursors, by contrast, see no major changes).
Index operations can now be done online; users can still access the table data and use other indexes on the table while one index is being created, altered or dropped

The MAXDOP clause can now be specified on index data definition language, thus controlling the number of parallel operations used by that specific statement.

Indexes in SQL Server also support XML columns, which is a big improvement

The new ALLOW_ROW_LOCKS and ALLOW_PAGE_LOCKS options can be used to control the level at which locking occurs for the index.
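The three index enhancements together, in one hypothetical statement:

```sql
CREATE NONCLUSTERED INDEX ix_Orders_CustomerId
ON dbo.Orders (CustomerId)
WITH (ONLINE = ON,           -- table stays available during the build
      MAXDOP = 2,            -- limit parallelism for this statement
      ALLOW_ROW_LOCKS = ON,
      ALLOW_PAGE_LOCKS = OFF)
```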

Disaster Recovery Reporting scripts

Have you ever been asked a seemingly simple question about a SQL Server configuration, and realize that you do not remember (or do not know) the answer? Some information is critical to rebuilding your SQL Server machine in the event of a disaster. I’ve compiled seven questions, along with how to address them using Transact-SQL queries. You can utilize these scripts wherever you see fit (within custom reporting, stored procedures).
Some qualifications: first, these scripts are to be used with SQL Server 2000 (not 7.0 or 6.5). Also, these scripts are sprinkled with system table and undocumented stored procedure calls. I know this makes me a bad person, but I'll gladly re-write them for future versions; in the meantime, I just want answers to critical questions. Lastly, these scripts assume you have sysadmin server role permissions.

Question 1: How was your SQL Server instance installed? What service packs have been installed?
Answer: This script lists the server name, product level, version, collation, edition, instance name (if a named instance), whether the instance is a virtual server (clustered), whether full-text indexing is installed (not necessarily used), the security mode, and the license type. One caveat about the license type: it sometimes displays a "Disabled" value. This bug is referenced in Microsoft Knowledge Base article 291332.

SELECT SERVERPROPERTY('ServerName') as 'Server Name',
SERVERPROPERTY('ProductLevel') as 'Product Level',
SERVERPROPERTY('ProductVersion') as 'Product Version',
SERVERPROPERTY('collation') as 'Server Collation',
SERVERPROPERTY('edition') as 'SQL Server Edition',
SERVERPROPERTY('InstanceName') as 'Instance Name',
CASE SERVERPROPERTY('IsClustered')
WHEN 0 THEN 'No'
WHEN 1 THEN 'Yes'
END as 'Is it clustered?',
CASE SERVERPROPERTY('IsFullTextInstalled')
WHEN 0 THEN 'No'
WHEN 1 THEN 'Yes'
END as 'Full text engine installed?',
CASE SERVERPROPERTY('IsIntegratedSecurityOnly')
WHEN 0 THEN 'Mixed'
WHEN 1 THEN 'Integrated security only'
END as 'Security mode',
SERVERPROPERTY('LicenseType') as 'License Type'

Question 2: How is SQL Server configured? For example, is parallelism enabled? Is AWE enabled?
Answer: Run this script to show the advanced options (and then disable the viewing of advanced options when finished):
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure
EXEC sp_configure 'show advanced options', 0
RECONFIGURE

Question 3: What are the databases on your SQL Server instance, and what is their status?
Answer:Use sp_helpdb to list databases on the SQL Server instance. Check out the status column to see what database options are configured.
EXEC master..sp_helpdb

Question 4: What was the installation path where you installed SQL Server?
Answer:Use xp_regread to view the installation path of SQL Server from the registry.

--- For the default instance
EXEC master..xp_regread 'HKEY_LOCAL_MACHINE', 'SOFTWARE\Microsoft\MSSQLServer\Setup', 'SQLPath'

--- For a named instance (substitute your instance name)
EXEC master..xp_regread 'HKEY_LOCAL_MACHINE', 'SOFTWARE\Microsoft\Microsoft SQL Server\<InstanceName>\Setup', 'SQLPath'

Question 5: What are the drive letters on your SQL Server machine, and how much space is left?
Answer: Use xp_fixeddrives to view drive letters and their remaining free MBs.
EXEC master..xp_fixeddrives

Question 6: Where are all the database files located?
Answer: Always a fun question, especially when you find little surprises about bad file placement. Use this script to identify file locations for all databases on the SQL Server instance:

CREATE TABLE #Files (
[DatabaseNM] sysname NOT NULL,
[name] [nchar] (128) NOT NULL ,
[fileid] [smallint] NOT NULL ,
[filename] [nchar] (260) NOT NULL ,
[filegroup] [nvarchar] (128) NULL ,
[size] [nvarchar] (18) NULL ,
[maxsize] [nvarchar] (18) NULL ,
[growth] [nvarchar] (18) NULL ,
[usage] [varchar] (9) NOT NULL
)

INSERT #Files
EXEC master..sp_MSForeachdb 'USE ?
SELECT ''?'', name, fileid, filename,
FILEGROUP_NAME(groupid),
CONVERT(nvarchar(15), size * 8) + N'' KB'',
CASE maxsize WHEN -1 THEN N''Unlimited''
ELSE CONVERT(nvarchar(15), maxsize * 8) + N'' KB'' END,
CASE status & 0x100000 WHEN 0x100000 THEN
CONVERT(nvarchar(3), growth) + N''%''
ELSE CONVERT(nvarchar(15), growth * 8) + N'' KB'' END,
CASE status & 0x40 WHEN 0x40 THEN ''log only'' ELSE ''data only'' END
FROM sysfiles
ORDER BY fileid'

SELECT * FROM #Files

DROP TABLE #Files

/*---- Status flags ---
0x100000 = Growth is in percentage, not pages.
0x2 = Disk file.
0x40 = Log device.
*/

Question 7: Where are the backup files located?
Answer: Assuming you are backing up your databases, in a pinch you will want to know where the backup files exist.

This script shows the location of the latest full database backup for each database being backed up.

SELECT A.database_name as 'DBName',
A.backup_finish_date as 'Backup Finished',
B.physical_device_name as 'Backup Filename'
FROM msdb.dbo.backupset A,
msdb.dbo.backupmediafamily B,
(SELECT database_name,
MAX(backup_finish_date) as 'maxfinishdate'
FROM msdb.dbo.backupset
WHERE Type = 'D' --- D for Full Database Backup, 'L' for Log Backup
GROUP BY database_name) C
WHERE A.media_set_id = B.media_set_id AND
A.database_name = C.database_name AND
A.backup_finish_date = C.maxfinishdate AND
A.type = 'D'

Interview Preparation: SQL Server 2000 Security "Best Practices".

This article in the series provides you with a list of SQL Server 2000 security "best practices".
Change the sa login password!

This is absolutely necessary. Use a strong password, and keep the password secure.
Install the latest security patches and service packs
With severe virus breakouts now commonplace, it is crucial to keep your SQL Server Instance patched with the latest service packs and security patches.
Sign up for the Microsoft Security Notification Service
To be notified by Microsoft when new security patches are released, sign up for the notification service here.
When possible, use integrated security
This is a tough policy for most businesses to adopt: third-party applications continue to use SQL logins in their products. When possible, however, use integrated security (Windows Authentication logins/groups). It is significantly more secure than using SQL logins (which can be used by anonymous sources).
When possible, use Windows Groups over individual Windows users
In addition to using Windows Authentication, consider using Windows Groups to assign SQL Server permissions instead of individual Windows users. Assigning permissions to Windows Groups allows you to consolidate management of permissions and ensure consistent security access.
Use the rule of "least permissions"
Set up logins with only the permissions they need. Avoid adding logins to server or database roles that provide more access than they need.
Grant permissions at the database role level
Avoid granting individual object permissions to specific database users. Instead, assign them to a database or user-defined role, and then make the database user a member of that role.
Do not allow dynamic SQL!
Minimize potential SQL Injection attacks and performance issues by restricting dynamic SQL within your Transact-SQL code. Rewrite calls to EXEC or sp_executesql with static SQL instead.
Use stored procedures
Don't embed Transact-SQL DML or DDL statements within your web pages. Use stored procedure calls instead. When calling stored procedures, make sure your application is validating (for SQL Injection) all parameters passed.

Evaluate firewall configurations
Keep the SQL Server machine behind a firewall. Based on your network architecture, evaluate whether or not specific TCP/IP ports should be blocked (TCP port 1433, UDP port 1434, or other ports for named instances). Use the Server Network Utility to determine the ports being used.
Disable cross-database ownership chaining
Cross-database ownership chaining (which occurs when a source object depends on objects in other databases), when enabled, can create security access for unintended database users. Beginning with SP3, you can turn this behavior off at both the server and the database level.
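A sketch of the two switches (SP3 or later; the database name is illustrative):

```sql
-- Server-wide default: disable cross-database ownership chaining
EXEC sp_configure 'Cross DB Ownership Chaining', 0
RECONFIGURE

-- Per-database override, if one specific database still needs it
EXEC sp_dboption 'MyDatabase', 'db chaining', 'true'
```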

Restrict EXEC access to xp_cmdshell
Access to xp_cmdshell in the wrong hands can cause tremendous harm to your server. If SQL Server is running under a domain Windows account, other servers could be impacted as well.
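By default only sysadmin members can run it, so the key is to avoid granting it more broadly and to revoke any grant that has slipped in; a sketch:

```sql
USE master
-- Revoke any direct execute permission granted to public
REVOKE EXECUTE ON xp_cmdshell FROM public
```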
Minimize the SQL Server and SQL Server Agent Service Account permissions
When possible, use the system account option for the SQL Server and SQL Server Agent Service Accounts. If that is not possible, choose an account with the least amount of necessary permissions.
Minimize the permissions of the non-sysadmin proxy account
If you must configure this proxy account, remember that its permissions are now opened up to non-sysadmin users. Minimize the permissions of this account.
Schedule clean-up time; remove inactive users
Remove logins, database users, and linked server connections that are no longer in use. Remove logins for departed employees or clients immediately.
Monitor your server role permissions
Security has a way of "changing" over time, particularly if you have multiple employees with sysadmin or administrator permissions to your SQL Server instance. At least weekly, check membership of the server roles. You can do this by running the following Transact-SQL script:

SELECT 'ServerRole' = A.name,
'MemberName' = B.name
FROM master.dbo.spt_values A
INNER JOIN master.dbo.sysxlogins B ON
A.number & B.xstatus = A.number
WHERE A.low = 0 AND
A.type = 'SRV'

Check the error log for login failures
You can check the SQL Server error log in Enterprise Manager, or via Transact-SQL, for all login failures to your instance:

-- Script that populates errors into
-- a temporary table.

CREATE TABLE #errors
(errorlog varchar(255), continuationrow int)

INSERT #errors
(errorlog, continuationrow)
EXEC xp_readerrorlog

SELECT errorlog
FROM #errors WHERE errorlog LIKE '%Login failed%'

DROP TABLE #errors

Login failure logging must be turned on. To do so, in Enterprise Manager, right-click the server you wish to enable and select Properties. On the Security tab, select 'Failure'. You must restart the SQL Server service for this new setting to take effect.
Lock down file directory permissions and physical access to the SQL Server machine
Evaluate the security of file/administrative shares and directories. Remove inappropriate permissions. In addition to NTFS security, make sure the actual SQL Server machine is kept in a secure area. The screen should be locked (requiring a valid Windows login) at the terminal. Terminal Services access should also be restricted to the necessary staff.

Remove any sensitive file data from the SQL Server machine, including log files generated by the SQL Server 2000 installation (sqlstp.log, sqlsp.log, and setup.iss under :\Program Files\Microsoft SQL Server\MSSQL\Install, or the :\Program Files\Microsoft SQL Server\MSSQL$\Install folder for named instances).
Download and use Microsoft Baseline Security Analyzer V1.2
A free tool that looks for issues on your SQL Server machine. This tool reviews security concerns outside of SQL Server too. You can download it here.
Download and use Microsoft's Best Practices Analyzer Tool
This free tool isn't just limited to security concerns, but also evaluates server settings, database settings, and database object Transact-SQL. Download it here.

More in Joseph Sack's book, SQL Server 2000 Fast Answers.

SQL Server 2000 Frequently Used DBCC Commands

This article presents a quick review of DBCC commands commonly used by SQL Server DBAs and developers. During an interview, it is unlikely you will be asked to relay specific syntax for these DBCC commands; however, you may be asked to demonstrate the proper context in which to use them.

DBCC CHECKALLOC checks page usage and allocation in the database. Use this command if allocation errors are found for the database. If you run DBCC CHECKDB, you do not need to run DBCC CHECKALLOC, as DBCC CHECKDB includes the same checks (and more) that DBCC CHECKALLOC performs.

Courtesy: Joseph Sack

DBCC CHECKCATALOG checks for consistency in and between system tables. This check is not executed within the DBCC CHECKDB command, so running this command weekly is recommended.

DBCC CHECKCONSTRAINTS alerts you to any CHECK or constraint violations. Use it if you suspect that there are rows in your tables that do not meet the constraint or CHECK constraint rules.

A very important DBCC command, DBCC CHECKDB should run on your SQL Server instance on at least a weekly basis. Although each release of SQL Server reduces occurrences of integrity or allocation errors, they still do happen. DBCC CHECKDB includes the same checks as DBCC CHECKALLOC and DBCC CHECKTABLE. DBCC CHECKDB can be rough on concurrency, so be sure to run it at off-peak times.

DBCC CHECKTABLE is almost identical to DBCC CHECKDB, except that it is performed at the table level, not the database level. DBCC CHECKTABLE verifies index and data page links, index sort order, page pointers, index pointers, data page integrity, and page offsets. DBCC CHECKTABLE uses schema locks by default, but can use the TABLOCK option to acquire a shared table lock. CHECKTABLE also performs object checking using parallelism by default (if on a multi-CPU system).

DBCC CHECKFILEGROUP works just like DBCC CHECKDB, only DBCC CHECKFILEGROUP checks the specified filegroup for allocation and structural issues. If you have a very large database (this term is relative, and higher end systems may be more apt at performing well with multi-GB or TB systems ) , running DBCC CHECKDB may be time-prohibitive. If your database is divided into user defined filegroups, DBCC CHECKFILEGROUP will allow you to isolate your integrity checks, as well as stagger them over time.

DBCC CHECKIDENT returns the current identity value for the specified table, and allows you to correct the identity value if necessary.
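Both uses in one sketch, against a hypothetical table t1 with an identity column:

```sql
-- Report the current identity value without changing it
DBCC CHECKIDENT ('t1', NORESEED)

-- Correct (reseed) the identity value to a specific number
DBCC CHECKIDENT ('t1', RESEED, 100)
```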

If your database allows modifications and has indexes, you should rebuild your indexes on a regular basis. The frequency of your index rebuilds depends on the level of database activity, and how quickly your database and indexes become fragmented. DBCC DBREINDEX allows you to rebuild one or all indexes for a table. Like DBCC CHECKDB, DBCC CHECKTABLE, DBCC CHECKALLOC, running DBREINDEX during peak activity times can significantly reduce concurrency.

Microsoft introduced the excellent DBCC INDEXDEFRAG statement beginning with SQL Server 2000. This DBCC command, unlike DBCC DBREINDEX, does not hold long-term locks on indexes. Use DBCC INDEXDEFRAG for indexes that are not very fragmented; otherwise the operation will take far longer than running DBCC DBREINDEX. In spite of its ability to run during peak periods, DBCC INDEXDEFRAG has had limited effectiveness compared to DBCC DBREINDEX (or dropping and re-creating the index).

The DBCC INPUTBUFFER command is used to view the last statement sent by a client connection to SQL Server. When calling this DBCC command, you designate the SPID to examine. (SPID is the server process ID, which you can get by viewing current activity in Enterprise Manager or by executing sp_who.)

DBCC OPENTRAN is a Transact-SQL command that is used to view the oldest running transaction for the selected database. The DBCC command is very useful for troubleshooting orphaned connections (connections still open on the database but disconnected from the application or client), and identification of transactions missing a COMMIT or ROLLBACK. This command also returns the oldest distributed and undistributed replicated transactions, if any exist within the database. If there are no active transactions, no data will be returned. If you are having issues with your transaction log not truncating inactive portions, DBCC OPENTRAN can show if an open transaction may be causing it.
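A sketch of checking a database for its oldest open transaction, then inspecting the offending connection (the database name is illustrative):

```sql
DBCC OPENTRAN ('MyDatabase')
-- If an old SPID is reported, see what it last sent:
-- DBCC INPUTBUFFER (<spid>)
```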

You may not use this too frequently, however it is an interesting DBCC command to execute periodically, particularly when you suspect you have memory issues. DBCC PROCCACHE provides information about the size and usage of the SQL Server procedure cache.

The DBCC SHOWCONTIG command reveals the level of fragmentation for a specific table and its indices. This DBCC command is critical to determining if your table or index has internal or external fragmentation. Internal fragmentation concerns how full an 8K page is. When a page is underutilized, more I/O operations may be necessary to fulfill a query request than if the page was full, or almost full. External fragmentation concerns how contiguous the extents are. There are eight 8K pages per extent, making each extent 64K. Several extents can make up the data of a table or index. If the extents are not physically close to each other, and are not in order, performance could diminish.

DBCC SHRINKDATABASE shrinks the data and log files in your database. Avoid executing this command during busy periods in production, as it has a negative impact on I/O and user concurrency. Also remember that you cannot shrink a database past the target percentage specified, shrink smaller than the model database, shrink a file past the original file creation size, or shrink a file size used in an ALTER DATABASE statement.

DBCC SHRINKFILE allows you to shrink the size of individual data and log files. (Use sp_helpfile to gather database file ids and sizes).

Trace flags are used within SQL Server to temporarily enable or disable specific SQL Server instance characteristics. Traces are enabled using the DBCC TRACEON command and disabled using DBCC TRACEOFF. DBCC TRACESTATUS displays the status of trace flags. You'll most often see TRACEON used in conjunction with deadlock logging (providing more verbose error information).
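A sketch of the deadlock-logging use mentioned above (trace flag 1204 gives verbose deadlock output in SQL Server 2000; -1 applies the flag server-wide):

```sql
DBCC TRACEON (1204, -1)   -- enable verbose deadlock reporting globally
DBCC TRACESTATUS (-1)     -- list all trace flags currently active
DBCC TRACEOFF (1204, -1)  -- disable it again
```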

Execute DBCC USEROPTIONS to see which options are in effect for your current user connection. This can be helpful if you are trying to determine whether your current user options are inconsistent with the database options.

Joseph Sack

Monday, April 18, 2005



--Should be large

EXEC master..xp_getfiledetails 'c:\shrink_test_data.mdf'

BACKUP DATABASE shrink_test TO DISK = 'NUL'

BACKUP DATABASE shrink_test TO DISK = 'NUL'

--Make sure the tlog file doesn't auto-truncate


Processed 10128 pages for database 'shrink_test', file 'shrink_test_data' on file 1.
Processed 1 pages for database 'shrink_test', file 'shrink_test_log' on file 1.
BACKUP DATABASE successfully processed 10129 pages in 28.287 seconds (2.933 MB/sec).

Database backed up: Database: shrink_test, creation date(time): 2005/04/15(08:13:12), pages dumped: 16603, first LSN: 302:422:1, last LSN: 302:567:1, number of dump devices: 1, device information: (FILE=1, TYPE=DISK: {'d:\sqldata\MSSQL\BACKUP\NUL'}).

How to read the database log file

DBCC LOGINFO('dbname')

- To view the VLF (Virtual Log File) layout.
- SQL Server truncates the log file in units of VLFs.
- In the result, status = 2 denotes that the VLF is in use.

Tuesday, April 12, 2005

What does a well-maintained database mean?

Well-maintained databases are those that have:

1. A well-planned backup and restore strategy

2. Effective log monitoring

3. Capacity planning and testing that have defined an appropriately sized transaction log

4. No operations that clear the transaction log; other than normal log backups, such operations should never occur.

How to prevent someone from truncating the transaction log of a database

Just set trace flag 3231:

DBCC TRACEON (3231)

More on avoiding breaks in the continuity of the database transaction log.

Friday, April 08, 2005

Disaster Recovery on SQL Server

Disaster Recovery on SQL Server

The different types of disasters on SQL Server:

1. Database Corruption

2. Long Recovery

3. Suspect Database

4. Hardware related Disaster

5. SQL Server does not start / does not respond

1. Corruption
This kind of disaster commonly occurs due to allocation errors in the pages that actually store the data within SQL Server. It can be avoided by proactive steps, such as having a database maintenance plan in place and ensuring DBCC CHECKDB is run against the databases at regular intervals.

It is recommended to run these checks at least weekly on medium-sized databases and fortnightly on very large databases. This will highlight any allocation or corruption issues with the databases, if there are any, before they cause harm, so that corrective action can be taken with ample lead time.

To be continued...

More in Microsoft Knowledge Base article 307775.