Monday 31 December 2012

SQL-Server : Unlock tables locked by SQL

If your tables are locked in SQL Server by SSIS, SSRS, or some other tool, we need to kill the offending process ID in SQL Server to unlock them.

If a table is locked by an orphaned distributed transaction, SPID = -2 is stored in the syslockinfo table.

You can't kill this process ID (SPID = -2) directly. Instead, use the query below to find the transaction's unit of work (UOW), and kill whatever you get from the query result.

select req_transactionUOW
from master..syslockinfo
where req_spid = -2

KILL '010C2764-C765-7168-854X-5475A5D789S3'


Thx,
RS

Friday 28 December 2012

SSRS : Limitations

Below are some limitations of SSRS:


1) Page numbers cannot be accessed in the RDL body.
2) Table columns cannot be merged.
3) SSRS preview mode does not allow you to modify formatting on the fly the way Crystal Reports does.
4) Only a limited set of HTML tags can be used, and you cannot use JavaScript in SSRS, unlike Cognos.
5) Because SSRS doesn't allow JavaScript code, we can't rename parameter labels dynamically based on the selection of other parameters.
6) SSRS does not currently support CSS.
7) Chart colors cannot easily be customized to match a company's brand colors. For example, if you click a color (country), say 'blue', in a first chart (e.g. Sales across Countries), then the child chart (state) should display its data in different shades of blue.
8) Subreports are not allowed in the page header/footer.
9) A subreport's value cannot be accessed anywhere else in the RDL.
10) SSRS doesn't provide backward compatibility.

SSRS : Splitting Values


If we want to split a value in SSRS, we can use the built-in Split function as follows.

For example : 

If we want to split "AB,BC,CD" into "AB", "BC" and "CD":

Split("AB,BC,CD",",").GetValue(0)
Split("AB,BC,CD",",").GetValue(1)
Split("AB,BC,CD",",").GetValue(2)

Thx,
RS

Swap two column values in SQL-Server


Sometimes we need to update one column with another in the same table, or in other words swap two values. For this we can use a simple UPDATE statement, since each SET assignment reads the pre-update values of the row:

UPDATE temp
SET testColumn1 = testColumn2, 
    testColumn2 = testColumn1

Thx,
RS

Thursday 20 December 2012

SSRS : client machine date and time

Hi All,

In SSRS we sometimes need to show the client machine's date and time in the report. We can do this using the expression below:

=System.TimeZone.CurrentTimeZone.ToLocalTime(Globals!ExecutionTime)

Thx,
TX

SQL SERVER – 2008 – Download and Install Samples Database AdventureWorks 2008 - Using Script

Hi All,

This post is about the AdventureWorks SQL Server sample database. Many times we need a sample database to practice our queries and operations on, and AdventureWorks is Microsoft's open source sample database. There are different methods to download and install AdventureWorks, but here I'll mention the method which I find easiest.


Below is the link to download the script and data files.


Click here : AdventureWorks 2008 OLTP Script

                                        OR
                  AdventureWorks 2008R2 OLTP Script

Click on the link above and download the files. Extract them somewhere on your PC, open the folder, and you will find an "instawdb.sql" file; open it.


Uncomment the "SqlSamplesDatabasePath" and "SqlSamplesSourceDataPath" variables and change the paths.


Now, in the menu, select Query > SQLCMD Mode.


Execute the script.


The AdventureWorks sample database is now installed on your computer.


Thx,

RS 

Friday 9 November 2012

Count the number of tables, SPs, functions or views in a database

/* Count Number Of Tables In A Database */
SELECT COUNT(*) AS TABLE_COUNT FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE='BASE TABLE'

/* Count Number Of Views In A Database */
SELECT COUNT(*) AS VIEW_COUNT FROM INFORMATION_SCHEMA.VIEWS 

/* Count Number Of Functions In A Database */
SELECT COUNT(*) AS FUNCTION_COUNT FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE = 'FUNCTION' 

/* Count Number Of Stored Procedures In A Database */ 
SELECT  COUNT(*) AS PROCEDURE_COUNT FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE = 'PROCEDURE'
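
As a variant (a sketch of my own, not from the original post), the same counts can be read in one pass from the sys.objects catalog view, filtering on the object type codes ('U' = user table, 'V' = view, 'P' = procedure, 'FN'/'IF'/'TF' = function types):

```sql
/* Count tables, views, procedures and functions in one query */
SELECT type_desc AS OBJECT_TYPE, COUNT(*) AS OBJECT_COUNT
FROM sys.objects
WHERE type IN ('U', 'V', 'P', 'FN', 'IF', 'TF')
GROUP BY type_desc
```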

Tuesday 6 November 2012

LEFT JOIN with same table

Today I came across a situation where a friend wanted to show a parent column's value next to the child column, so he was writing a subquery for it. I wrote the following query instead:

FROM 

TabelA As Parent
Left JOIN
TabelA As Child
On Parent.Id = Child.ParentID

With this query he got exactly the result he wanted. Avoid subqueries wherever possible, because they can hurt performance badly.


He was then writing a CASE statement to append the parent column's value to the child column. I replaced that CASE statement with the following:


SELECT

Child.Value + ' ' + ISNULL(Parent.Value,'')
FROM 
TabelA As Child
Left JOIN
TabelA As Parent
On Parent.Id = Child.ParentID
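
To make this concrete, here is a self-contained sketch (the table and its data are invented for illustration, reusing the post's TabelA name) showing the self-join producing the parent value next to each child row:

```sql
-- Illustrative data: a two-level hierarchy stored in one table
CREATE TABLE TabelA (Id int, ParentID int, Value varchar(20));
INSERT INTO TabelA VALUES (1, NULL, 'Electronics');
INSERT INTO TabelA VALUES (2, 1, 'Phones');
INSERT INTO TabelA VALUES (3, 1, 'Laptops');

SELECT Child.Value + ' ' + ISNULL(Parent.Value, '') AS Combined
FROM TabelA AS Child
LEFT JOIN TabelA AS Parent ON Parent.Id = Child.ParentID;
-- 'Phones' and 'Laptops' come back with 'Electronics' appended;
-- the root row has no parent, so ISNULL supplies an empty string.
```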

Always try to use the SQL built-in functions (here, ISNULL).

Saturday 27 October 2012

Add and Drop column from the table

Hi,

Sometimes we have a requirement to add or drop a column in an existing table.

We can drop a column using the following statement:

ALTER TABLE table_name
DROP COLUMN column_name;

For adding a column we can use the following statement:

ALTER TABLE table_name
ADD column_name datatype;

But this adds the column to the table with NULL values, after which we would need to update the column with some default value. We can give the column a value at the time of adding it by using a default constraint, with the following statement:

ALTER TABLE table_name
ADD column_name datatype DEFAULT(0);

Example :

ALTER TABLE Employee
ADD IsActive Bit DEFAULT(0);
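
One caveat (my addition, not from the original post): for a nullable column, the default constraint only applies to future inserts, and existing rows still get NULL. To backfill existing rows at the time of adding the column, declare it NOT NULL or add WITH VALUES (the IsArchived column below is a hypothetical example):

```sql
-- Existing rows receive 0 immediately, not NULL
ALTER TABLE Employee
ADD IsActive bit NOT NULL DEFAULT(0);

-- Or keep the column nullable but still backfill the default
ALTER TABLE Employee
ADD IsArchived bit DEFAULT(0) WITH VALUES;  -- hypothetical column
```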

Thx,
Rahul

Find the size of parent folder and Child folder using C#

Hi,

Once I had a requirement where I needed to get the size of each and every folder. I thought that if I did the same thing in code, I would only need to pass the folder path and I would get the size of each and every child folder.

Below is the C# code, which gives the size of the parent and child folders.

using System;
using System.IO;

class Program
{
    static void Main()
    {
        try
        {
            Console.Write("Enter the path of folder : ");
            string path = Console.ReadLine();
            GetDirectorySize(path);
        }
        catch (Exception e)
        {
            Console.WriteLine("Exception: " + e.Message);
        }
        finally
        {
            Console.WriteLine("Task done");
        }
        Console.ReadLine();
    }

    static void GetDirectorySize(string p)
    {
        Console.Write("Enter the file name of the text file ");
        Console.Write("(make sure the text file is new, otherwise you may lose your content) : ");
        string name = Console.ReadLine();

        using (StreamWriter sw = new StreamWriter(name + ".txt"))
        {
            DirectoryInfo dir = new DirectoryInfo(p);

            // Size of each immediate sub-folder (including everything below it)
            foreach (DirectoryInfo subDir in dir.GetDirectories())
            {
                long subSize = 0;
                foreach (FileInfo file in subDir.GetFiles("*", SearchOption.AllDirectories))
                {
                    try { subSize += file.Length; }
                    catch { } // skip files we cannot read
                }
                // Write a line of text per sub-folder
                sw.WriteLine("Sub Folder : " + subDir.FullName + " : Directory size in MB : "
                             + Math.Round(subSize / (1024.0 * 1024.0), 2));
            }

            // Total size of the root folder
            long size = 0;
            foreach (FileInfo file in dir.GetFiles("*", SearchOption.AllDirectories))
            {
                try { size += file.Length; }
                catch { }
            }
            sw.WriteLine("Root Folder : " + p + " : Directory size in MB : "
                         + Math.Round(size / (1024.0 * 1024.0), 2));
        } // StreamWriter is closed by the using block
    }
}

Thursday 25 October 2012

Delete database forcefully


I am not sure how many times you might want to forcefully close all the active connections and drop a database. However, this is a very interesting question.

One option to do this is to take the database in SINGLE USER mode and then issue a DROP DATABASE command.

USE master;
GO
ALTER DATABASE dbname 
SET SINGLE_USER 
WITH ROLLBACK IMMEDIATE;
GO
DROP DATABASE dbname;

Wednesday 24 October 2012

How to avoid NOT IN in sql query


Below is an example to demonstrate how to avoid NOT IN in a SQL query. NOT IN can hurt performance very badly. You must have noticed several instances where developers write a query as given below.

SELECT t1.*
FROM Table1 t1
WHERE t1.ID NOT IN (SELECT t2.ID FROM Table2 t2)
GO

The query demonstrated above can easily be replaced by an OUTER JOIN. Indeed, replacing it with an OUTER JOIN is the best practice. The query that generates the same result is shown here using an OUTER JOIN and a WHERE clause on the join.

/* LEFT JOIN - WHERE NULL */

SELECT t1.*,t2.*
FROM Table1 t1
LEFT JOIN Table2 t2 ON t1.ID = t2.ID
WHERE t2.ID IS NULL

The above example can also be created using Right Outer JOIN.
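
Another rewrite worth knowing (my addition, not from the original post): NOT EXISTS. Unlike NOT IN, it behaves predictably when Table2.ID contains NULLs (a NULL in the NOT IN list makes the query return no rows at all), and the optimizer typically treats it as an anti-join:

```sql
SELECT t1.*
FROM Table1 t1
WHERE NOT EXISTS (SELECT 1 FROM Table2 t2 WHERE t2.ID = t1.ID)
```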


Friday 12 October 2012

List of system stored procedure in SQL-Server


Below is a list of some system stored procedures which are helpful:

sp_spaceused [table] - shows you the space used by the table
sp_helpindex [table] - shows you index info (same info as sp_help)
sp_helpconstraint [table] - shows you primary/foreign key/default and other constraints
sp_depends [obj] - shows dependencies of an object, for example:
sp_depends [sproc] - shows what tables etc. are affected/used by this stored proc
sp_rename [obj] - for renaming database objects (tables, columns, indexes, etc.)
sp_tables - shows you all the table names in the schema
sp_datatype_info - shows you all the information for datatypes
sp_pkeys [table] - shows you the list of primary keys of a table
sp_fkeys [table] - gives the list of foreign keys and the tables in which they are used
sp_databases - gives the list of all the databases 

This query will give us all the stored procedure names in the database:
SELECT * FROM sys.procedures;

If you want to see the system stored procedures, then you can use the query below.
SELECT * FROM sys.all_objects WHERE schema_id = 4;

Monday 8 October 2012

Tables with their column names and data types

The query below gives us all the tables with their column names and data types from a schema.

=====================================================

SELECT Table_Schema, Table_Name, Column_Name, Data_Type
FROM information_schema.columns
WHERE table_name in ( select name from dbo.sysobjects
where xtype = 'U' )
order by table_schema, table_name
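
As a variant (my sketch), the same list can be produced without touching dbo.sysobjects directly, which the coding guidelines elsewhere on this blog advise against, by joining only INFORMATION_SCHEMA views:

```sql
SELECT c.Table_Schema, c.Table_Name, c.Column_Name, c.Data_Type
FROM information_schema.columns c
JOIN information_schema.tables t
  ON t.table_schema = c.table_schema AND t.table_name = c.table_name
WHERE t.table_type = 'BASE TABLE'
ORDER BY c.table_schema, c.table_name
```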

Thx,
Rahul

Sunday 7 October 2012

Remove SQL Server database from single-user mode to Multi-User Mode

This query should be executed in the master database (a system database)... 


select d.name, d.dbid, spid, login_time, nt_domain, nt_username, loginame
  from sysprocesses p inner join sysdatabases d on p.dbid = d.dbid
 where d.name = 'db_Name'
GO

KILL <spid>   -- the spid value returned by the query above

exec sp_dboption 'db_Name', 'single user', 'FALSE'
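
Note (my addition): sp_dboption is deprecated and was removed in SQL Server 2012; a sketch of the equivalent using ALTER DATABASE:

```sql
ALTER DATABASE db_Name
SET MULTI_USER
WITH ROLLBACK IMMEDIATE;
```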

Thursday 4 October 2012

Database coding conventions and guidelines


Databases are the heart and soul of many of the recent enterprise applications and it is very essential to pay special attention to database programming. I've seen in many occasions where database programming is overlooked, thinking that it's something easy and can be done by anyone. This is wrong. For a better performing database you need a real DBA and a specialist database programmer, let it be Microsoft SQL Server, Oracle, Sybase, DB2 or whatever! If you don't use database specialists during your development cycle, database often ends up becoming the performance bottleneck. I decided to write this article, to put together some of the database programming best practices, so that my fellow DBAs and database developers can benefit!

Here are some of the programming guidelines, best practices, keeping quality, performance and maintainability in mind.

• Decide upon a database naming convention, standardize it across your organization, and be consistent in following it. It helps make your code more readable and understandable.

• Do not depend on undocumented functionality. The reasons being:
- You will not get support from Microsoft when something goes wrong with your undocumented code
- Undocumented functionality is not guaranteed to exist (or behave the same) in a future release or service pack, thereby breaking your code
• Try not to use system tables directly. System table structures may change in a future release. Wherever possible, use the sp_help* stored procedures or INFORMATION_SCHEMA views. There will be situations where you cannot avoid accessing system tables though!
• Make sure you normalize your data at least to the 3rd normal form. At the same time, do not compromise on query performance. A little bit of denormalization helps queries perform faster.
• Write comments in your stored procedures, triggers and SQL batches generously, whenever something is not very obvious. This helps other programmers understand your code clearly. Don't worry about the length of the comments, as it won't impact the performance, unlike interpreted languages like ASP 2.0.
• Do not use SELECT * in your queries. Always write the required column names after the SELECT statement, like SELECT CustomerID, CustomerFirstName, City. This technique results in less disk IO and less network traffic and hence better performance.
• Try to avoid server side cursors as much as possible. Always stick to a 'set based approach' instead of a 'procedural approach' for accessing/manipulating data. Cursors can be easily avoided by SELECT statements in many cases. If a cursor is unavoidable, use a simple WHILE loop instead, to loop through the table. I personally tested and concluded that a WHILE loop is faster than a cursor most of the time. But for a WHILE loop to replace a cursor you need a column (primary key or unique key) to identify each row uniquely, and I personally believe every table must have a primary or unique key.
• Avoid the creation of temporary tables while processing data, as much as possible, as creating a temporary table means more disk IO. Consider advanced SQL or views or table variables of SQL Server 2000 or derived tables, instead of temporary tables. Keep in mind that, in some cases, using a temporary table performs better than a highly complicated query.
• Try to avoid wildcard characters at the beginning of a word while searching using the LIKE keyword, as that results in an index scan, which is defeating the purpose of having an index. The following statement results in an index scan, while the second statement results in an index seek:

1. SELECT LocationID FROM Locations WHERE Specialities LIKE '%pples'
2. SELECT LocationID FROM Locations WHERE Specialities LIKE 'A%s'

Also avoid searching with not equals operators (<> and NOT) as they result in table and index scans. If you must do heavy text-based searches, consider using the Full-Text search feature of SQL Server for better performance.
• Use 'Derived tables' wherever possible, as they perform better. Consider the following query to find the second highest salary from Employees table:


SELECT MIN(Salary)
FROM Employees
WHERE EmpID IN
(
SELECT TOP 2 EmpID
FROM Employees
ORDER BY Salary Desc
)

The same query can be re-written using a derived table as shown below, and it performs twice as fast as the above query:

SELECT MIN(Salary)
FROM
(
SELECT TOP 2 Salary
FROM Employees
ORDER BY Salary Desc
) AS A

This is just an example, the results might differ in different scenarios depending upon the database design, indexes, volume of data etc. So, test all the possible ways a query could be written and go with the efficient one. With some practice and understanding of 'how SQL Server optimizer works', you will be able to come up with the best possible queries without this trial and error method.
• While designing your database, design it keeping 'performance' in mind. You can't really tune performance later, when your database is in production, as it involves rebuilding tables/indexes, re-writing queries. Use the graphical execution plan in Query Analyzer or SHOWPLAN_TEXT or SHOWPLAN_ALL commands to analyze your queries. Make sure your queries do 'Index seeks' instead of 'Index scans' or 'Table scans'. A table scan or an index scan is a very bad thing and should be avoided where possible (sometimes when the table is too small or when the whole table needs to be processed, the optimizer will choose a table or index scan).
• Prefix the table names with owner names, as this improves readability and avoids unnecessary confusion. Microsoft SQL Server Books Online even states that qualifying table names with owner names helps in execution plan reuse.
• Use SET NOCOUNT ON at the beginning of your SQL batches, stored procedures and triggers in production environments, as this suppresses messages like '(1 row(s) affected)' after executing INSERT, UPDATE, DELETE and SELECT statements. This in turn improves the performance of the stored procedures by reducing network traffic.
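
As a small illustration of the SET NOCOUNT ON advice (the procedure and table names are invented for this sketch):

```sql
CREATE PROC usp_GetOrderCount   -- hypothetical procedure name
AS
BEGIN
    SET NOCOUNT ON   -- suppress '(n row(s) affected)' chatter on the wire
    SELECT COUNT(*) AS OrderCount FROM dbo.Orders
END
```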
• Use the more readable ANSI-standard join clauses instead of the old-style joins. With ANSI joins, the WHERE clause is used only for filtering data. Whereas with the older-style joins, the WHERE clause handles both the join condition and filtering data. The first of the following two queries shows an old-style join, while the second one shows the new ANSI join syntax:

SELECT a.au_id, t.title
FROM titles t, authors a, titleauthor ta
WHERE
a.au_id = ta.au_id AND
ta.title_id = t.title_id AND
t.title LIKE '%Computer%'


SELECT a.au_id, t.title
FROM authors a
INNER JOIN
titleauthor ta
ON
a.au_id = ta.au_id
INNER JOIN
titles t
ON
ta.title_id = t.title_id
WHERE t.title LIKE '%Computer%'

Be aware that the old style *= and =* left and right outer join syntax may not be supported in a future release of SQL Server, so you are better off adopting the ANSI standard outer join syntax.

• Do not prefix your stored procedure names with 'sp_'. The prefix sp_ is reserved for system stored procedures that ship with SQL Server. Whenever SQL Server encounters a procedure name starting with sp_, it first tries to locate the procedure in the master database, then looks for any qualifiers (database, owner) provided, then tries dbo as the owner. So you can really save time in locating the stored procedure by avoiding the sp_ prefix. But there is an exception! While creating general-purpose stored procedures that are called from all your databases, go ahead and prefix those stored procedure names with sp_ and create them in the master database.
• Views are generally used to show specific data to specific users based on their interest. Views are also used to restrict access to the base tables by granting permission on only views. Yet another significant use of views is that, they simplify your queries. Incorporate your frequently required complicated joins and calculations into a view, so that you don't have to repeat those joins/calculations in all your queries, instead just select from the view.
• Use 'User Defined Datatypes', if a particular column repeats in a lot of your tables, so that the datatype of that column is consistent across all your tables.
• Do not let your front-end applications query/manipulate the data directly using SELECT or INSERT/UPDATE/DELETE statements. Instead, create stored procedures, and let your applications access these stored procedures. This keeps the data access clean and consistent across all the modules of your application, at the same time centralizing the business logic within the database.
• Try not to use the text and ntext datatypes for storing large textual data. The 'text' datatype has some inherent problems associated with it. You cannot directly write or update text data using INSERT and UPDATE statements (you have to use special statements like READTEXT, WRITETEXT and UPDATETEXT). There are a lot of bugs associated with replicating tables containing text columns. So, if you don't have to store more than 8 KB of text, use the char(8000) or varchar(8000) datatypes.
• If you have a choice, do not store binary files, image files (Binary large objects or BLOBs) etc. inside the database. Instead store the path to the binary/image file in the database and use that as a pointer to the actual binary file. Retrieving, manipulating these large binary files is better performed outside the database and after all, database is not meant for storing files.
• Use the char data type for a column only when the column is non-nullable. If a char column is nullable, it is treated as a fixed-length column in SQL Server 7.0+. So a char(100), when NULL, will eat up 100 bytes, resulting in space wastage. Use varchar(100) in this situation. Of course, variable-length columns do have a very small processing overhead over fixed-length columns. Carefully choose between char and varchar depending upon the length of the data you are going to store.
• Avoid dynamic SQL statements as much as possible. Dynamic SQL tends to be slower than static SQL, as SQL Server must generate an execution plan every time at runtime. IF and CASE statements come in handy to avoid dynamic SQL. Another major disadvantage of using dynamic SQL is that, it requires the users to have direct access permissions on all accessed objects like tables and views. Generally, users are given access to the stored procedures which reference the tables, but not directly on the tables. In this case, dynamic SQL will not work. Consider the following scenario, where a user named 'dSQLuser' is added to the pubs database, and is granted access to a procedure named 'dSQLproc', but not on any other tables in the pubs database. The procedure dSQLproc executes a direct SELECT on titles table and that works. The second statement runs the same SELECT on titles table, using dynamic SQL and it fails with the following error:

Server: Msg 229, Level 14, State 5, Line 1
SELECT permission denied on object 'titles', database 'pubs', owner 'dbo'.

To reproduce the above problem, use the following commands:

sp_addlogin 'dSQLuser'
GO
sp_defaultdb 'dSQLuser', 'pubs'
USE pubs
GO
sp_adduser 'dSQLUser', 'dSQLUser'
GO
CREATE PROC dSQLProc
AS
BEGIN
SELECT * FROM titles WHERE title_id = 'BU1032' --This works
DECLARE @str CHAR(100)
SET @str = 'SELECT * FROM titles WHERE title_id = ''BU1032'''
EXEC (@str) --This fails
END
GO
GRANT EXEC ON dSQLProc TO dSQLuser
GO

Now login to the pubs database using the login dSQLuser and execute the procedure dSQLproc to see the problem.
• Consider the following drawbacks before using IDENTITY property for generating primary keys. IDENTITY is very much SQL Server specific, and you will have problems if you want to support different database backends for your application.IDENTITY columns have other inherent problems. IDENTITY columns run out of numbers one day or the other. Numbers can't be reused automatically, after deleting rows. Replication and IDENTITY columns don't always get along well. So, come up with an algorithm to generate a primary key, in the front-end or from within the inserting stored procedure. There could be issues with generating your own primary keys too, like concurrency while generating the key, running out of values. So, consider both the options and go with the one that suits you well.
• Minimize the usage of NULLs, as they often confuse the front-end applications, unless the applications are coded intelligently to eliminate NULLs or convert the NULLs into some other form. Any expression that deals with NULL results in a NULL output. ISNULL and COALESCE functions are helpful in dealing with NULL values. Here's an example that explains the problem:

Consider the following table, Customers which stores the names of the customers and the middle name can be NULL.

CREATE TABLE Customers
(
FirstName varchar(20),
MiddleName varchar(20),
LastName varchar(20)
)

Now insert a customer into the table whose name is Tony Blair, without a middle name:

INSERT INTO Customers
(FirstName, MiddleName, LastName)
VALUES ('Tony',NULL,'Blair')

The following SELECT statement returns NULL, instead of the customer name:

SELECT FirstName + ' ' + MiddleName + ' ' + LastName FROM Customers

To avoid this problem, use ISNULL as shown below:

SELECT FirstName + ' ' + ISNULL(MiddleName + ' ','') + LastName FROM Customers
• Use Unicode datatypes like nchar, nvarchar, ntext, if your database is going to store not just plain English characters, but a variety of characters used all over the world. Use these datatypes, only when they are absolutely needed as they need twice as much space as non-unicode datatypes.
• Always use a column list in your INSERT statements. This helps in avoiding problems when the table structure changes (like adding a column). Here's an example which shows the problem.

Consider the following table:

CREATE TABLE EuropeanCountries
(
CountryID int PRIMARY KEY,
CountryName varchar(25)
)

Here's an INSERT statement without a column list , that works perfectly:

INSERT INTO EuropeanCountries
VALUES (1, 'Ireland')

Now, let's add a new column to this table:


ALTER TABLE EuropeanCountries
ADD EuroSupport bit

Now run the above INSERT statement. You get the following error from SQL Server:

Server: Msg 213, Level 16, State 4, Line 1
Insert Error: Column name or number of supplied values does not match table definition.

This problem can be avoided by writing an INSERT statement with a column list as shown below:

INSERT INTO EuropeanCountries
(CountryID, CountryName)
VALUES (1, 'England')
• Perform all your referential integrity checks, data validations using constraints (foreign key and check constraints). These constraints are faster than triggers. So, use triggers only for auditing, custom tasks and validations that can not be performed using these constraints. These constraints save you time as well, as you don't have to write code for these validations and the RDBMS will do all the work for you.
• Always access tables in the same order in all your stored procedures/triggers consistently. This helps in avoiding deadlocks. Other things to keep in mind to avoid deadlocks are: Keep your transactions as short as possible. Touch as little data as possible during a transaction. Never, ever wait for user input in the middle of a transaction. Do not use higher-level locking hints or restrictive isolation levels unless they are absolutely needed. Make your front-end applications deadlock-intelligent, that is, these applications should be able to resubmit the transaction in case the previous transaction fails with error 1205. In your applications, process all the results returned by SQL Server immediately, so that the locks on the processed rows are released, hence no blocking.
• Offload tasks like string manipulations, concatenations, row numbering, case conversions, type conversions etc. to the front-end applications, if these operations are going to consume more CPU cycles on the database server (It's okay to do simple string manipulations on the database end though). Also try to do basic validations in the front-end itself during data entry. This saves unnecessary network roundtrips.
• If back-end portability is your concern, stay away from bit manipulations with T-SQL, as this is very much RDBMS specific. Further, using bitmaps to represent different states of a particular entity conflicts with the normalization rules.
• Consider adding a @Debug parameter to your stored procedures. This can be of bit data type. When a 1 is passed for this parameter, print all the intermediate results, variable contents using SELECT or PRINT statements and when 0 is passed do not print debug information. This helps in quick debugging of stored procedures, as you don't have to add and remove these PRINT/SELECT statements before and after troubleshooting problems.
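
A minimal sketch of the @Debug pattern described above (procedure, parameter and variable names are invented for illustration):

```sql
CREATE PROC usp_ProcessOrder      -- hypothetical procedure
    @OrderID int,
    @Debug bit = 0                -- pass 1 to print intermediate results
AS
BEGIN
    DECLARE @Status int
    SET @Status = 5               -- ... some intermediate computation ...

    IF @Debug = 1
        PRINT 'Status after computation: ' + CAST(@Status AS varchar(10))

    -- ... rest of the procedure ...
END
```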
• Do not call functions repeatedly within your stored procedures, triggers, functions and batches. For example, you might need the length of a string variable in many places of your procedure, but don't call the LEN function whenever it's needed, instead, call the LEN function once, and store the result in a variable, for later use.
• Make sure your stored procedures always return a value indicating the status. Standardize on the return values of stored procedures for success and failures. The RETURN statement is meant for returning the execution status only, but not data. If you need to return data, use OUTPUT parameters.
• If your stored procedure always returns a single row resultset, consider returning the resultset using OUTPUT parameters instead of a SELECT statement, as ADO handles output parameters faster than resultsets returned by SELECT statements.
• Always check the global variable @@ERROR immediately after executing a data manipulation statement (like INSERT/UPDATE/DELETE), so that you can rollback the transaction in case of an error (@@ERROR will be greater than 0 in case of an error). This is important, because, by default, SQL Server will not rollback all the previous changes within a transaction if a particular statement fails. This behavior can be changed by executing SET XACT_ABORT ON. The @@ROWCOUNT variable also plays an important role in determining how many rows were affected by a previous data manipulation (also, retrieval) statement, and based on that you could choose to commit or rollback a particular transaction.
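
A sketch of the @@ERROR check described above (the table and values are illustrative):

```sql
BEGIN TRAN

UPDATE dbo.Orders SET OrderStatus = 5 WHERE OrderID = 100

IF @@ERROR <> 0            -- check immediately after the DML statement
BEGIN
    ROLLBACK TRAN          -- undo the whole transaction on error
    RETURN
END

COMMIT TRAN
```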
• To make SQL Statements more readable, start each clause on a new line and indent when needed. Following is an example:

SELECT title_id, title
FROM titles
WHERE title LIKE 'Computing%' AND
title LIKE 'Gardening%'
• Though we survived the Y2K, always store 4 digit years in dates (especially, when using char or int datatype columns), instead of 2 digit years to avoid any confusion and problems. This is not a problem with datetime columns, as the century is stored even if you specify a 2 digit year. But it's always a good practice to specify 4 digit years even with datetime datatype columns.
• In your queries and other SQL statements, always represent date in yyyy/mm/dd format. This format will always be interpreted correctly, no matter what the default date format on the SQL Server is. This also prevents the following error, while working with dates:

Server: Msg 242, Level 16, State 3, Line 2
The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value.
• As is true with any other programming language, do not use GOTO or use it sparingly. Excessive usage of GOTO can lead to hard-to-read-and-understand code.

• Do not forget to enforce unique constraints on your alternate keys.

• Always be consistent with the usage of case in your code. On a case-insensitive server, your code might work fine, but it will fail on a case-sensitive SQL Server if your code is not consistent in case. For example, if you create a table in a SQL Server database that has a case-sensitive or binary sort order, all references to the table must use the same case that was specified in the CREATE TABLE statement. If you name the table 'MyTable' in the CREATE TABLE statement and use 'mytable' in the SELECT statement, you get an 'object not found' or 'invalid object name' error.
• Though T-SQL has no concept of constants (like the ones in C language), variables will serve the same purpose. Using variables instead of constant values within your SQL statements, improves readability and maintainability of your code. Consider the following example:

UPDATE dbo.Orders
SET OrderStatus = 5
WHERE OrdDate < '2001/10/25'

The same update statement can be re-written in a more readable form as shown below:

DECLARE @ORDER_PENDING int
SET @ORDER_PENDING = 5

UPDATE dbo.Orders
SET OrderStatus = @ORDER_PENDING
WHERE OrdDate < '2001/10/25'

• Do not use the column numbers in the ORDER BY clause as it impairs the readability of the SQL statement. Further, changing the order of columns in the SELECT list has no impact on the ORDER BY when the columns are referred by names instead of numbers. Consider the following example, in which the second query is more readable than the first one:

SELECT OrderID, OrderDate
FROM Orders
ORDER BY 2

SELECT OrderID, OrderDate
FROM Orders
ORDER BY OrderDate

Friday 28 September 2012

Repeat a String N Times using the REPLICATE function


SQL Server has a built-in function that repeats a string a given number of times.

Function : REPLICATE(string,int)

Example : Select REPLICATE('rahul ',5)

Result : rahul rahul rahul rahul rahul 

Thursday 27 September 2012

Length of LOB data () to be replicated exceeds configured maximum 65536


Whenever you try to replicate data from a database that stores images (FILESTREAM/LOB data) and you have included those columns in your replication, you might get this error:

Length of LOB data (583669) to be replicated exceeds configured maximum 65536


The fix is to increase the 'max text repl size' server setting using T-SQL:
sp_configure 'max text repl size', '2147483647'
GO
RECONFIGURE

Tuesday 25 September 2012

Find the statistics on an Object and Drop them


Hi,

With the query below we can get the statistics details of an object and drop them

SELECT name, OBJECT_NAME(OBJECT_ID) AS ObjectName
FROM sys.stats
WHERE auto_created = 1
and OBJECT_NAME(OBJECT_ID) = '<OBJECT_NAME>';

DROP STATISTICS <OBJECT_NAME>.PrimaryKey

Get the active connection of SQL-Server

Hi,

The query below will list all the active connections in SSMS.


SELECT des.program_name,
des.login_name,
des.host_name,
COUNT(des.session_id) [Connections]
FROM sys.dm_exec_sessions des
INNER JOIN sys.dm_exec_connections DEC
ON des.session_id = DEC.session_id
WHERE des.is_user_process = 1
AND des.status != 'running'
GROUP BY des.program_name,
des.login_name,
des.host_name
--,der.database_id
HAVING COUNT(des.session_id) > 2
ORDER BY COUNT(des.session_id) DESC

Wednesday 16 May 2012

Get random values in TOP predicate using NEWID()

In SQL Server we have the NEWID() function, which returns a new GUID every time it is called. We can see this by executing the statement below.
       SELECT NEWID()


We can use this function in our order by clause while using the TOP Predicate.


Example : 


 SELECT TOP (3)
        EmployeeName,
        Salary
 FROM Employee
 ORDER BY NEWID()


This statement will return a different set of 3 records every time it runs.


Thx,
RS
 



Tuesday 15 May 2012

Configure Report Manager URL and TargetServerURL path from SQL-Server for reports

Yesterday I installed SQL Server 2008 R2 on my machine and tried to log in to Report Manager with my old URL, but I was not able to reach it. So I searched on the internet and found that

Report Manager URL is : http://servername/reports
TargetServerURL is : http://servername/reportserver

This is not entirely correct, because I got the same error with these URLs as well.

To find the correct URLs, open Reporting Services Configuration Manager from the SQL Server Configuration Tools folder in the Start menu.

Then go to Web Service URL and set the virtual directory path. This becomes our TargetServerURL; in my case it is: http://servername/ReportServer_MSSQLSERVERR2




Then go to Report Manager URL and set the virtual directory path. This becomes our Report Manager URL; in my case it is: http://servername/Reports_MSSQLSERVERR2





Thx,
RS

Monday 14 May 2012

10 Easy steps to walk through with SSRS

In this post I am giving the 10 easy steps to create the SSRS report.

Step 1 : Write a SQL function or stored procedure which we will use in SSRS.


Step 2 : Open BIDS (Business Intelligence Development Studio) and add a new project. Inside that project, open Solution Explorer, right-click on the Reports folder and click Add New Item.


Step 3 : In this dialogue select Report and Give it a name like CustomerReport


Step 4 : Now click on View and select Report Data. In this explorer, right-click the Data Sources folder and choose Add Data Source.


Step 5 : In the below dialogue box click on edit and give the datasource connection details.


Step 6 : Now click on Add dataset


Step 7 : Now in the dialogue box below, select the Stored Procedure radio button and give your stored procedure name in the textbox.


Step 8 : Right-click on the report and click Insert to get the list of controls for the report. Select Page Header, Page Footer, a table, one text box in the header and two text boxes in the footer.


Step 9 : Now go into the table properties and assign dataset to it. Now give the field name to each column. In the header text box write "Customer Details". At the footer in one text box write '= "Page:  " & Globals!PageNumber & "  of  " & Globals!TotalPages' this will give the page count and in second text box write '=FORMAT(Globals!ExecutionTime,"MM/dd/yyyy hh:mm tt")' this will give the report execution time.


Step 10 : Now Click on preview and see the report.
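As a sketch of Step 1, a minimal stored procedure could look like the one below. The dbo.Customer table and its column names are hypothetical; substitute your own:

```sql
-- Hypothetical procedure returning the data set for the CustomerReport
CREATE PROCEDURE dbo.GetCustomerDetails
AS
BEGIN
    SET NOCOUNT ON

    SELECT CustomerID,
           CustomerName,
           City,
           Country
    FROM dbo.Customer
    ORDER BY CustomerName
END
```

In Step 7, this is the procedure name you would type into the textbox.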


Thx,
RS

Attach a .MDF file without a .LDF file

When you want to attach an mdf file which does not have its corresponding ldf file, you can use the following command. This command will create the database and restore the mdf file.

EXEC sp_attach_single_file_db
    @dbname = 'AdventureWorks',
    @physname = 'D:\AdventureWorksLT2008_Data.mdf'

When I was using this command I got one error: "Access is denied". To solve this problem, go to the physical location of the mdf file, right-click --> Properties --> Security, click Edit and give Full Control on this file. Then this error will not come.
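Note that sp_attach_single_file_db is deprecated in newer versions of SQL Server; the documented equivalent, which also rebuilds the missing log file, is:

```sql
-- CREATE DATABASE ... FOR ATTACH_REBUILD_LOG creates a new log
-- file when only the .mdf is available.
CREATE DATABASE AdventureWorks
ON (FILENAME = 'D:\AdventureWorksLT2008_Data.mdf')
FOR ATTACH_REBUILD_LOG
```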


Thx,
RS

Wednesday 9 May 2012

SQL Server Interview Questions and Answers


1. Which TCP/IP port does SQL Server run on? How can it be changed?
By default, SQL Server listens on TCP port 1433. It can be changed from the TCP/IP properties in the Server Network Utility (SQL Server Configuration Manager in later versions).


2. What are the difference between clustered and a non-clustered index?
A clustered index is a special type of index that reorders the way records in the table are physically stored; therefore a table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.
A non-clustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf nodes of a non-clustered index do not consist of the data pages; instead, the leaf nodes contain index rows.
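A small sketch of the two index types on a hypothetical table:

```sql
-- Hypothetical table: the clustered index orders the data pages
-- themselves; a nonclustered index is a separate structure.
CREATE TABLE dbo.Employee
(
    EmployeeID int NOT NULL,
    LastName   varchar(50) NOT NULL
)

CREATE CLUSTERED INDEX IX_Employee_ID
    ON dbo.Employee (EmployeeID)       -- only one allowed per table

CREATE NONCLUSTERED INDEX IX_Employee_LastName
    ON dbo.Employee (LastName)         -- many allowed per table
```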

3. What is OLTP (Online Transaction Processing)?
In OLTP - online transaction processing systems relational database design use the discipline of data modeling and generally follow the Codd rules of data normalization in order to ensure absolute data integrity. Using these rules complex information is broken down into its most simple structures (a table) where all of the individual atomic level elements relate to each other and satisfy the normalization rules.

4. What's the difference between a primary key and a unique key?
Both primary key and unique key enforce uniqueness of the column on which they are defined. But by default a primary key creates a clustered index on the column, whereas a unique key creates a nonclustered index by default. Another major difference is that a primary key doesn't allow NULLs, but a unique key allows one NULL only.

5. What is difference between DELETE and TRUNCATE commands?
Delete command removes the rows from a table based on the condition that we provide with a WHERE clause. Truncate will actually remove all the rows from a table and there will be no data in the table after we run the truncate command.

TRUNCATE:
TRUNCATE is faster and uses fewer system and transaction log resources than DELETE.
TRUNCATE removes the data by deallocating the data pages used to store the table's data, and only the page deallocations are recorded in the transaction log.
TRUNCATE removes all rows from a table, but the table structure, its columns, constraints, indexes and so on, remains. The counter used by an identity for new rows is reset to the seed for the column.
You cannot use TRUNCATE TABLE on a table referenced by a FOREIGN KEY constraint. Because TRUNCATE TABLE does not log individual row deletions, it cannot activate a trigger.
TRUNCATE cannot be rolled back once the transaction has been committed.
TRUNCATE is DDL Command.
TRUNCATE Resets identity of the table

DELETE:
DELETE removes rows one at a time and records an entry in the transaction log for each deleted row.
If you want to retain the identity counter, use DELETE instead. If you want to remove table definition and its data, use the DROP TABLE statement.
DELETE Can be used with or without a WHERE clause
DELETE Activates Triggers.
DELETE can be rolled back.
DELETE is DML Command.
DELETE does not reset identity of the table.

Note: DELETE and TRUNCATE both can be rolled back when surrounded by TRANSACTION if the current session is not closed. If TRUNCATE is written in Query Editor surrounded by TRANSACTION and if session is closed, it can not be rolled back but DELETE can be rolled back.
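The note above can be demonstrated with a small, self-contained sketch (the table name is hypothetical):

```sql
-- Demo: TRUNCATE can be rolled back inside an explicit
-- transaction in the same session.
CREATE TABLE dbo.DemoRollback (ID int)
INSERT INTO dbo.DemoRollback VALUES (1), (2), (3)

BEGIN TRANSACTION
TRUNCATE TABLE dbo.DemoRollback
ROLLBACK TRANSACTION

SELECT COUNT(*) FROM dbo.DemoRollback   -- the 3 rows are back
```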

6. Can a stored procedure call itself or recursive stored procedure? How much level SP nesting is possible?
Yes. Because Transact-SQL supports recursion, you can write stored procedures that call themselves. Recursion can be defined as a method of problem solving wherein the solution is arrived at by repetitively applying it to subsets of the problem. A common application of recursive logic is to perform numeric computations that lend themselves to repetitive evaluation by the same processing steps. Stored procedures are nested when one stored procedure calls another or executes managed code by referencing a CLR routine, type, or aggregate. You can nest stored procedures and managed code references up to 32 levels.
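A minimal sketch of a recursive stored procedure (the procedure name and factorial task are hypothetical examples; @@NESTLEVEL reports the current depth against the 32-level limit):

```sql
-- Hypothetical recursive procedure computing a factorial
CREATE PROCEDURE dbo.Factorial
    @n int,
    @result bigint OUTPUT
AS
BEGIN
    IF @n <= 1
        SET @result = 1
    ELSE
    BEGIN
        -- EXEC arguments cannot be expressions, so compute @n - 1 first
        DECLARE @m int, @prev bigint
        SET @m = @n - 1
        EXEC dbo.Factorial @m, @prev OUTPUT
        SET @result = @n * @prev
    END
END
```

Usage: `DECLARE @f bigint; EXEC dbo.Factorial 5, @f OUTPUT; SELECT @f` returns 120.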

7. What is the difference between a Local and a Global temporary table?
A local temporary table exists only for the duration of a connection or, if defined inside a compound statement, for the duration of the compound statement.
A global temporary table (prefixed with ##) is visible to all sessions. It is dropped automatically when the session that created it ends and all other sessions have stopped referencing it; it does not remain in the database permanently.
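A quick sketch of the two scopes (table names are hypothetical):

```sql
-- #LocalTemp is visible only to the current session;
-- ##GlobalTemp is visible to all sessions until the creating
-- session ends and no other session references it.
CREATE TABLE #LocalTemp   (ID int)
CREATE TABLE ##GlobalTemp (ID int)

INSERT INTO #LocalTemp   VALUES (1)
INSERT INTO ##GlobalTemp VALUES (1)

-- Another session can SELECT from ##GlobalTemp,
-- but not from #LocalTemp.
```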

8. What is the STUFF function and how does it differ from the REPLACE function?
The STUFF function is used to overwrite existing characters. Using the syntax STUFF (string_expression, start, length, replacement_characters): string_expression is the string that will have characters substituted, start is the starting position, length is the number of characters in the string that are substituted, and replacement_characters are the new characters interjected into the string. The REPLACE function replaces all occurrences of a search string. Using the syntax REPLACE (string_expression, search_string, replacement_string), every incidence of search_string found in string_expression will be replaced with replacement_string.
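The difference in one small example:

```sql
-- STUFF overwrites one positional range;
-- REPLACE substitutes every occurrence of a search string.
SELECT STUFF('abcdef', 2, 3, 'XY')    -- 'aXYef'
SELECT REPLACE('banana', 'a', 'o')    -- 'bonono'
```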

9. What is PRIMARY KEY?
A PRIMARY KEY constraint is a unique identifier for a row within a database table. Every table should have a primary key constraint to uniquely identify each row and only one primary key constraint can be created for each table. The primary key constraints are used to enforce entity integrity.

10. What is UNIQUE KEY constraint?
A UNIQUE constraint enforces the uniqueness of the values in a set of columns, so no duplicate values are entered. The unique key constraints are used to enforce entity integrity as the primary key constraints.

11. What is FOREIGN KEY?
A FOREIGN KEY constraint prevents any actions that would destroy links between tables with the corresponding data values. A foreign key in one table points to a primary key in another table. Foreign keys prevent actions that would leave rows with foreign key values when there are no primary keys with that value. The foreign key constraints are used to enforce referential integrity.

12. What is CHECK Constraint?
A CHECK constraint is used to limit the values that can be placed in a column. The check constraints are used to enforce domain integrity.

13. What is NOT NULL Constraint?
A NOT NULL constraint enforces that the column will not accept null values. The not null constraints are used to enforce domain integrity, as the check constraints.

14. How to get @@ERROR and @@ROWCOUNT at the same time?
If @@ROWCOUNT is checked after the error-checking statement, it will be 0 because it has been reset. And if @@ROWCOUNT is checked before the error-checking statement, @@ERROR gets reset. To get @@ERROR and @@ROWCOUNT at the same time, read both in the same statement and store them in local variables.
SELECT @RC = @@ROWCOUNT, @ER = @@ERROR

15. What are the advantages of using Stored Procedures?
Stored procedure can reduced network traffic and latency, boosting application performance.
Stored procedure execution plans can be reused, staying cached in SQL Server's memory, reducing server overhead.
Stored procedures help promote code reuse.
Stored procedures can encapsulate logic. You can change stored procedure code without affecting clients.
Stored procedures provide better security to your data.

16. What is a table called, if it has neither Cluster nor Non-cluster Index? What is it used for?
An unindexed table, or heap. Microsoft Press books and Books Online (BOL) refer to it as a heap. A heap is a table that does not have a clustered index and, therefore, the pages are not linked by pointers; the IAM pages are the only structures that link the pages in a table together. Unindexed tables are good for fast storing of data. Many times it is better to drop all indexes from a table, do the bulk of the inserts, and restore those indexes after that.

17. Can SQL Servers linked to other servers like Oracle?
SQL Server can be linked to any server for which an OLE DB provider is available. For example, Oracle can be added as a linked server to SQL Server using the Microsoft OLE DB Provider for Oracle or Oracle's own OLE DB provider.
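A sketch of adding an Oracle linked server; the server name and TNS alias here are hypothetical:

```sql
-- Hypothetical example: add an Oracle instance as a linked server
-- using Oracle's OLE DB provider (OraOLEDB.Oracle).
EXEC sp_addlinkedserver
    @server     = 'ORACLE_LINK',
    @srvproduct = 'Oracle',
    @provider   = 'OraOLEDB.Oracle',
    @datasrc    = 'OracleTnsName'   -- TNS alias; an assumption here

-- Queries then use four-part names, e.g.:
-- SELECT * FROM ORACLE_LINK..SCHEMA_NAME.TABLE_NAME
```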

18. What is an execution plan? When would you use it? How would you view the execution plan?
An execution plan is basically a road map that graphically or textually shows the data retrieval methods chosen by the SQL Server query optimizer for a stored procedure or ad-hoc query. It is a very useful tool for a developer to understand the performance characteristics of a query or stored procedure, since the plan is what SQL Server places in its cache and uses to execute the stored procedure or query. Within Query Analyzer there is an option called "Show Execution Plan" (located on the Query drop-down menu); if this option is turned on, it will display the query execution plan in a separate window when the query is run again.