Jeff's SQL Server Blog

Random Thoughts & Cartesian Products with Microsoft SQL Server
Welcome to my weblog. My name is Jeff Smith; I am a software developer in Boston, MA, and I was recently named a 2009 SQL Server MVP. Check in frequently for tips, tricks, commentary and ideas on SQL Server and .NET programming.


Passing an Array or Table Parameter to a Stored Procedure

Introduction

SQL is a set-based language, and often we wish to pass sets of data as a parameter to a stored procedure.  For example, you might wish to pass a set of Customers to an invoice-generating stored procedure, or a set of employees for whom you'd like to calculate a particular bonus.  Sometimes we need even more complex data than simple lists; you might have a time sheet application that lets a user create or edit a weekly time sheet at the client, and then you need to accept and validate an entire week of data when submitted.

A common way to pass large sets of data to a stored procedure is to use an XML or CSV character string.  This can work, but it can be complicated to encode and then decode the strings passed in, and you don't have a clean way to validate the data as it is passed.  For example, you cannot force the CSV to accept only a list of valid customer IDs, or a particular data type.  The CSV format can also get very complicated if a lot of data with multiple columns is passed in, especially if the data might contain characters that need to be escaped and then un-escaped to work in a CSV format.

SQL Server 2000 and 2005 do have a table data type, but you cannot pass a table variable to a stored procedure as an argument, so that doesn’t work.

So, is there an easy way for a stored procedure to accept really complicated data, and for that data to be completely validated with foreign key constraints, strong data types, and possibly even CHECK constraints?  That would certainly make things quicker and easier.  Well, believe it or not, SQL Server has everything you need already in place to accept large, complicated sets of data as parameters in a stored procedure!

A Simplistic Single User Approach

Let’s create a simple example: Suppose we have an application that tracks Customer data, and at the client application we need to set it up so that users can quickly edit the Status of many customers all at once, and then click SAVE to update the database.  Of course, this example can easily be expanded to include other attributes of the customer as well, such as the customer Name or perhaps a customer type.  

Our stored procedure will accept a set of rows with two columns: CustomerID and StatusID.  Both CustomerID and StatusID must be valid entries in the Customers and Status tables, respectively.

What we can do is simply create a regular, permanent table that we can use to store the set of parameter values:

create table Param_CustomerStatus
(
    CustomerID int references Customers (CustomerID) primary key,
    StatusID int references Status (StatusID) not null
)

I’ll call that table a parameter table from now on.  With that table in place, we can write our stored procedure that updates the customer status for all of those parameter values, like this:

create procedure UpdateCustomerStatus
as
    update Customers
        set StatusID = x.StatusID
    from Customers
    inner join Param_CustomerStatus x on
        Customers.CustomerID = x.CustomerID

    delete from Param_CustomerStatus

To call our procedure, all we need to do is load up the Param_CustomerStatus table with our values and then call the UpdateCustomerStatus stored procedure.  Simple and efficient!  We know that our stored procedure only “accepts” valid CustomerID and StatusID values, and we can quickly and efficiently use standard INSERT statements to load up our parameter table without worrying about XML or CSV strings and all the headaches that go with it.  

To allow for a client application to call this stored procedure, we can allow the client to use SQLBulkCopy or something similar to directly bulk load the CustomerID/StatusID values into this table, or we can create a stored procedure that clients can call over and over to add values to it:

create procedure Param_CustomerStatusAdd @CustomerID int, @StatusID int
as
    insert into Param_CustomerStatus (CustomerID, StatusID)
    values (@CustomerID, @StatusID)


With these components in place, you can essentially pass very complex data to any stored procedure, just by simply filling up the parameter table first.
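For instance, the single-user call sequence might look like this (the CustomerID and StatusID values below are purely illustrative):

```sql
-- load the parameter table, one row per customer to update
-- (customers 101 and 102, status 2 are hypothetical values)
exec Param_CustomerStatusAdd @CustomerID = 101, @StatusID = 2
exec Param_CustomerStatusAdd @CustomerID = 102, @StatusID = 2

-- one set-based call does all the work and empties the parameter table
exec UpdateCustomerStatus
```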

The Problem

There’s a catch to this simplistic approach, of course, and I’m sure most of you already spotted it: What happens in a multi-user environment?   Two users might be filling up the Param_CustomerStatus table at the same time, or one user might empty it out while another is trying to fill it, causing all sorts of havoc when the UpdateCustomerStatus procedure is run.   

A More Robust Multi-User Solution

Here’s a quick and easy way to allow this to work in multi-user environments: Before loading parameter values into the parameter table, you ask for a unique parameter key value.  Once you have the parameter key, you include that value with your parameter values when inserting rows into the parameter table. Then, when you call the stored procedure, you simply pass in the parameter key again. The stored procedure will use the key when accessing and emptying out the parameter table.

Let’s create a simple, single ParameterKey table that all of our parameter tables will use:

create table ParameterKeys
(
    ParameterKey int identity primary key,
    DateCreated datetime not null
)

With this table in place, we create our new and improved multi-user Param_CustomerStatus table:

create table Param_CustomerStatus
(
    ParameterKey int references ParameterKeys (ParameterKey) on delete cascade,
    CustomerID int references Customers (CustomerID) on delete cascade,
    StatusID int not null references Status (StatusID) on delete cascade,
    primary key (ParameterKey, CustomerID)
)

Notice that now the primary key of our parameter table is the combination of ParameterKey and CustomerID, allowing multiple users to put entries into this table at the same time without violating PK constraints, as long as they all have their own unique parameter key.  

You might also notice that we use cascading deletes on the foreign key constraints to ensure that our parameter table never stops the database from being able to update or delete entities in our data.

Now that we have our table in place, a simple stored procedure lets you get a parameter key when you are ready to fill up a parameter table:
 
create procedure Param_GetKey @ParameterKey int output
as
    delete from ParameterKeys where DateCreated < dateadd(day,-2, getdate())
    insert into ParameterKeys (DateCreated) values (getdate())
    set @ParameterKey = scope_identity()
    
Notice that this stored procedure also removes any old parameter key values and any data associated with them.  This is a nice way to ensure that if something ends abruptly and a parameter table isn’t properly emptied, things stay nice and clean over time.  The cascading delete constraint immediately removes all parameter data for any old keys, all at once, from any and all parameter tables in your database.

We also can create a stored procedure to delete the data for a particular parameter key, which should be called when you are done using the parameter values:

create procedure Param_DeleteKey @ParameterKey int
as
    delete from ParameterKeys where ParameterKey = @ParameterKey

Finally, we then can simply create our UpdateCustomerStatus procedure like this:

create procedure UpdateCustomerStatus @ParameterKey int
as
    update Customers
        set StatusID = x.StatusID
    from Customers
    inner join Param_CustomerStatus x on
        Customers.CustomerID = x.CustomerID
    where
        x.ParameterKey = @ParameterKey

    exec Param_DeleteKey @ParameterKey

It’s as simple as that!  Everything works fine in a multi-user environment, everything is quick and set-based, and you can truly pass very complicated data to stored procedures.  Nothing is stopping you from having multiple parameter tables for multiple parameters (all related to the same ParameterKeys table), and you can even have a single stored procedure accept very complicated, multi-table parameter values.  Best of all, everything is strongly typed and has foreign key constraints that guarantee that your parameter values are valid.

To allow for clients to insert parameter values to our parameter table, we might also create a stored procedure that the client application can use:

create procedure Param_CustomerStatusAdd @ParameterKey int, @CustomerID int, @StatusID int
as
    insert into Param_CustomerStatus (ParameterKey, CustomerID, StatusID)
    values (@ParameterKey, @CustomerID, @StatusID)

Note that you should add some validation to the stored procedure to alert the client if there is a problem adding the parameter; for example, they might try to add the same customer twice, or insert a customer that doesn't exist, or they might not specify a valid StatusID.  This is yet another great feature of this technique -- as you "prepare" your parameter data, it is also validated by the parameter table constraints!
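As a sketch of what that validation might look like (the return-code convention below is my own assumption, not a requirement of the technique):

```sql
-- hypothetical validating version of Param_CustomerStatusAdd;
-- returns 0 on success, a nonzero code describing the problem otherwise
create procedure Param_CustomerStatusAdd @ParameterKey int, @CustomerID int, @StatusID int
as
    if not exists (select * from Customers where CustomerID = @CustomerID)
        return 1   -- no such customer
    if not exists (select * from Status where StatusID = @StatusID)
        return 2   -- no such status
    if exists (select * from Param_CustomerStatus
               where ParameterKey = @ParameterKey and CustomerID = @CustomerID)
        return 3   -- customer already added for this key

    insert into Param_CustomerStatus (ParameterKey, CustomerID, StatusID)
    values (@ParameterKey, @CustomerID, @StatusID)
    return 0
```

The nice part is that even without these explicit checks, the primary key and foreign key constraints on the parameter table would reject the bad rows anyway; the checks just let you report a friendlier code back to the client.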

Here’s a quick example, where we could use a complicated SQL statement to tell us which customers we need to mark as inactive:

declare @key int
exec Param_GetKey @key output

insert into Param_CustomerStatus (ParameterKey, CustomerID, StatusID)
select @key, CustomerID, StatusID
from
-- long, complicated select here --

exec UpdateCustomerStatus @key

Now, clearly this is a simple example – you’d never need to call such a basic stored procedure in this way from your T-SQL code.  But perhaps changing the customer status is more than a simple UPDATE, and it involves updating multiple tables in the database, archiving data, sending notifications, and so on.  This allows you to generate the customer parameter list any way you need in multiple places, but all you need is one single stored procedure that uses that list to do the job in a quick and efficient set-based manner.

A Reporting Example

Here’s perhaps a more realistic example:  Suppose we have a reporting application that lets the user choose which Customers to include on a report, along with their ordering, and also lets them assign a comment to each customer.  The report then uses these criteria to display the results.

First, we need a parameter table for this report:

create table Param_CustomerReportFilter
(
    ParameterKey int references ParameterKeys (ParameterKey) on delete cascade,
    CustomerID int references Customers (CustomerID) on delete cascade,
    Sort int not null,
    Comment varchar(1000),
    primary key (ParameterKey, CustomerID)
)

Since a client will be setting the parameter values, we can create a stored procedure like this to add parameters to the Param_CustomerReportFilter table:

create procedure Param_CustomerReportFilterAdd
    @ParameterKey int,
    @CustomerID int,
    @Sort int,
    @Comment varchar(1000)
as
    insert into Param_CustomerReportFilter (ParameterKey, CustomerID, Sort, Comment)
    values (@ParameterKey, @CustomerID, @Sort, @Comment)

As before, you should add some error checking and a return value to this stored procedure to inform the client if any problems arose when preparing the parameter data.

Finally, our report will accept a @ParameterKey parameter and join to the Param_CustomerReportFilter table to create its results:

create procedure CustomerReport @ParameterKey int
as
    select
        c.*, f.Comment
    from
        Customers c
    inner join
        Param_CustomerReportFilter f on
            c.CustomerID = f.CustomerID and
            f.ParameterKey = @ParameterKey
    order by
        f.Sort

    exec Param_DeleteKey @ParameterKey

Our client application simply needs to keep track of the customers that the user wants to see on the report, along with the sort and any notes for each.  When the user is done and the report is requested, you simply get a parameter key, load up the Param_CustomerReportFilter table using either SQLBulkCopy or the Param_CustomerReportFilterAdd stored procedure, and then pass in that key when you finally call the CustomerReport stored procedure to return the report data.
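Putting it all together, the call sequence expressed in T-SQL might look like this (the customer IDs, sort values and comments are made up for illustration):

```sql
declare @key int
exec Param_GetKey @key output

-- choose the customers, their ordering, and a note for each
exec Param_CustomerReportFilterAdd @key, 101, 1, 'Key account'
exec Param_CustomerReportFilterAdd @key, 205, 2, 'Follow up next week'

-- run the report; it joins on @key and then deletes the key when done
exec CustomerReport @key
```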

A Few Notes

Here are a few more things to think about if you wish to use this technique:

Transaction Logs: Transaction logs can be an issue if you are filling and deleting these parameter tables often.  If you use a simple recovery model, this shouldn’t be an issue. You could move the parameter tables to an alternate database on your server if necessary.

Persistent Parameter Values: You don’t need to empty out the parameter tables each time a stored procedure is called that uses the parameter values; you could make this optional with an additional parameter value (i.e., @EmptyParams bit = 1), or you could just let the parameter table get cleaned out every day or so by the “auto-clean” feature of Param_GetKey.  Persisting the parameter values could be useful if you need to use them over and over for subsequent procedure calls.
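A minimal sketch of that optional-cleanup idea, assuming we add the @EmptyParams flag to the earlier update procedure:

```sql
create procedure UpdateCustomerStatus @ParameterKey int, @EmptyParams bit = 1
as
    update Customers
        set StatusID = x.StatusID
    from Customers
    inner join Param_CustomerStatus x on
        Customers.CustomerID = x.CustomerID
    where
        x.ParameterKey = @ParameterKey

    -- only discard the parameter rows if the caller asks us to
    if @EmptyParams = 1
        exec Param_DeleteKey @ParameterKey
```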

Performance: Once your parameter table is filled, performance will be excellent since it is just a simple, indexed JOIN to the parameter table to get things done.  Filling up the parameter table by executing single row INSERT stored procedures over and over for each parameter value might not be efficient if it is expensive to call database commands many times from the client.  As mentioned, you can increase the performance of this step by using SqlBulkCopy to bulk copy the parameter values to the parameter tables, bypassing the single row insert stored procedures completely.  

Conclusion

Like many simple techniques, there’s nothing special or revolutionary about doing things this way – we are just leveraging existing, basic features of SQL Server to get the job done.  No technique is perfect, but this is a pretty simple way to pass very complex data to a stored procedure that has a clear, clean, predefined structure and that can be validated in many different ways.  There is also no need to worry about clients not formatting CSV or XML values consistently or properly, or that certain limits regarding parameter size or values will be reached that may make working with CSV or XML strings difficult.

(6/27/2007 Update: fixed an issue with the UPDATE statements ... thanks Hugo!)


Posted on Tuesday, June 26, 2007 9:36 AM | Filed Under [ T-SQL Techniques, Database Design ]

Feedback


# re: Creating Array or Table Parameters for a Stored Procedure

Hi Jeff,

There is a serious problem with your UpdateCustomerStatus stored procedure - as presented here, it will set StatusID to NULL for each customer not present in the parameter table. I don't think I'd want to fill a table with all 5 million customers and their current status just to be able to update the status of 150 of them in a single go.

I think you want to change the stored proc to


create procedure UpdateCustomerStatus @ParameterKey int
as
update Customer
set StatusID = (select StatusID from
Param_CustomerStatus x
where Customer.CustomerID = x.CustomerID and
x.ParameterKey = @ParameterKey)
where exists (select * from
Param_CustomerStatus x
where Customer.CustomerID = x.CustomerID and
x.ParameterKey = @ParameterKey);

delete from Param_CustomerStatus;

(UNTESTED)
6/27/2007 3:40 AM | Hugo Kornelis

# re: Creating Array or Table Parameters for a Stored Procedure

Hugo -- great catch, thank you! I usually use JOINs on my updates, but I've been trying to do things "the right way" according to Celko and others by using correlated subqueries. I definitely screwed up by missing the WHERE clause. Which is interesting, because the danger of THIS happening seems to outweigh the benefit of using a correlated subquery!

Anyway, thank you so much for catching this, I will update the article.

- Jeff
6/27/2007 8:05 AM | Jeff

# re: Creating Array or Table Parameters for a Stored Procedure

I would use the join in this case. I use the ANSI syntax whenever I can get away with only a single subquery (e.g. if I'm updating the entire table), but if there are multiple subqueries that means multiple index operations on the table. No reason to incur twice the performance cost for little or no gain.

6/27/2007 10:25 AM | Adam Machanic

# re: Creating Array or Table Parameters for a Stored Procedure

Hi Adam -- I definitely agree, in fact, that's how I re-wrote them when I fixed the code. I just tend to not like correlated subqueries in general, especially when they must be repeated multiple times, as you mention.
6/27/2007 10:38 AM | Jeff

# re: Creating Array or Table Parameters for a Stored Procedure

Hey guys, haven't you considered using a TEMP table instead of a fixed table?
That's what temp tables were originally meant for.

Cheers.
6/28/2007 9:20 AM | Boris Janischevsky

# re: Creating Array or Table Parameters for a Stored Procedure

Boris -- Do you mind giving an example of how a temp table will help you do any of the things I've shown in the article? Thanks in advance!

- Jeff
6/28/2007 4:15 PM | Jeff

# re: Creating Array or Table Parameters for a Stored Procedure

Well, I thought it was a bad idea to add a procedural(!) language to a relational database, and you point out the reasons why.

Having to use such complicated techniques to do something which should be natural in a relational environment surely tells that SQL has severe shortcomings.

I used similar techniques (with temp tables) in Sybase, and as Sybase and SQL Server have the same roots, I guess temp tables work the same in SQL Server.

Temp tables in Sybase are connected to the database session and are only visible in the session they were created in.
When the session is closed, the temp table will be dropped.

So you don't have to get a parameter key, you just have to create the temp table and it will be your private place, even if another user (in another session) created a temp table with the same name.

Example:
(Sybase Syntax)

Create table #Param_CustomerStatus
(
CustomerID int not null primary key,
StatusID int not null
)

There's one drawback here: Temporary tables may not have referential integrity constraints.
(At least in Sybase; I don't know if this restriction is also present in SQL Server)
6/29/2007 7:33 AM | Horst Makitta

# re: Creating Array or Table Parameters for a Stored Procedure

There are several drawbacks to the temp table:

1) there is no predefined structure for your parameters; you just create any old temp table, with any old name and column definitions and data, and then call a stored procedure which *hopes* that a temp table will be there named exactly as it expects with the columns that it expects, and that stored procedure will of course get recompiled and a new query plan each time that it is called. This is not a clean, efficient, well-defined way to do this.

2) as you mentioned, no referential integrity, no guarantee that expected CHECK constraints will be in place, or correct data types, no guarantee of the correct primary key, no indexes, and so on.

3) you don't have the benefit of writing your stored procedures against actual, permanent database objects -- you have to develop them against a temp table that doesn't exist at the time you create the procedure. This also applies to stored procedures that load up the temp table with values from your client.

Creating a clean, well-defined permanent table that defines exactly what the stored procedure accepts is a very easy, short and clear way to do this, with very few shortcomings. The "parameter key" technique is also extremely short and simple.

I fail to see any scenario where a temp table is useful for passing data to a stored procedure for all of these reasons. And, of course, you have to make sure that it stays in scope, which means it must be updated in the same session as the stored procedure call, which isn't always easy or possible, especially if you'd like to use the quick and efficient SQLBulkCopy to fill up your parameters table -- I don't think you can do that with a temp table, and if you could, it would no longer be in scope when you go to call the stored procedure.

If anyone really thinks a temp table is a good idea or that it will work cleanly, please, if you don't mind, provide a complete example that actually works. This will help us better determine the best method.

Thanks for the feedback!!
6/29/2007 8:34 AM | Jeff

# re: Creating Array or Table Parameters for a Stored Procedure

>>Having to use such complicated techniques to do something which should be natural in a relational environment surely tells that SQL has severe shortcomings.

By the way, I have to address this statement ... this is complicated? Writing the stored procedure to accept a "table" is simple:

1. create a parameter table

2. create a stored procedure that accepts the parameter key and uses the parameter table.

you have the one-time overhead of creating the parameter key table + stored proc, but that is also quick and easy and is just a cut and paste operation.

Calling a stored procedure written to accept a table of values in this way is also easy:

1. get a parameter key

2. fill up parameter table

3. call stored procedure, passing the key

Again, to me, this is very, very simple. Not ideal, of course (passing a table variable would be nice), but still very simple and much cleaner and easier than trying to concatenate and extract CSV strings and the like.

Compare this to having the ability to use a table variable as a parameter. To write a stored procedure to accept a table variable, you would:

1. define the table variable's structure

2. write a stored procedure using it

and to call that stored procedure, you would:

1. declare a table variable

2. fill it up with data

3. pass that table variable to your stored procedure as a parameter

Notice any similarities to the "perfect world table variable scenario" and the steps to use the technique shown in the article? They are just about exactly the same!
6/29/2007 8:44 AM | Jeff

# re: Creating Array or Table Parameters for a Stored Procedure

I dislike this method for a single reason:
it depends on data you need in your sproc being loaded into the table outside the sproc.

my opinion is that all params should be inputs, because then the sproc is self-sufficient
using those input params.

that's why i'd rather stick to xml :)

however i must admit that the idea is good for specific load requirements and not for general use
but i think you already addressed that :)
6/29/2007 10:01 AM | Mladen

# re: Creating Array or Table Parameters for a Stored Procedure

Much as I probably should have RTFA, I do have some recent experience with something like this; SQL 2005 allows you (via OPENXML) to load XML into a 'table', at which point you can do whatever you want with it... so, create a stored procedure with an XML type parameter, and voilà: you have just passed a complex data structure and can do whatever you want with it.
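[Editor's note: for readers who haven't used OPENXML, a rough sketch of the approach described here might look like the following; the procedure name and XML element names are invented for the example:]

```sql
create procedure UpdateCustomerStatusXml @doc xml
as
    declare @h int
    -- parse the XML document into an internal handle
    exec sp_xml_preparedocument @h output, @doc

    update Customers
        set StatusID = x.StatusID
    from Customers
    inner join openxml(@h, '/Customers/Customer', 1)
        with (CustomerID int, StatusID int) x on
        Customers.CustomerID = x.CustomerID

    -- release the parsed document's memory
    exec sp_xml_removedocument @h
```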

SQL 2005 rules.
7/2/2007 12:22 PM | JStroud

# re: Creating Array or Table Parameters for a Stored Procedure

I agree about XML being useful, but the problem I have is the effort it takes for some client applications to produce the XML and the fact that (as far as I know) there is no way to specify that the XML parameter *must* conform to a certain schema, or even let your client know what it expects in terms of elements and attributes. In addition, when doing things purely on the database server, it seems silly to take a SELECT, convert to XML, pass the XML to a stored procedure, which then extracts the XML back into a table or temp table to do the "work".

I guess basically the advantage of my approach is that it is 100% SQL -- just data, which is nice since everything just uses regular old SQL to get things done. I totally agree that it is not perfect or ideal, but I think it works well enough and it is pretty simple to implement. XML is definitely useful in the right situations as well, of course. Maybe it is a subconscious relational database bias against XML that is causing my reluctance to use it more often!

Thanks, JStroud!!
7/2/2007 12:42 PM | Jeff

# re: Creating Array or Table Parameters for a Stored Procedure

Well, I did not advertise the temp table approach as being superior to the static table approach; I just tried to explain why Boris suggested a temp table.

Some comments:
As said before, temporary tables in Sybase cannot have FK constraints. But they can have primary keys, indexes (e.g. unique indexes to enforce uniqueness) and check constraints.

If your point is that there is no guarantee from the POV of the writer of the Sproc that the table was created the way it should have been created - well there is no guarantee that the regular table wasn't modified in the meantime either.

I also don't agree that the stored proc needs to be recompiled every time the Sproc is called - once per session will be enough.

If you want more data checks, create a second temp table within the stored procedure with all the checks and indexes, then do a

INSERT INTO #inner SELECT * FROM #outer

If this works, the data quality is correct.

However i agree with Mladen here: It would be better if these checks and constraints could be expressed in the definition of the parameters of the stored procedure - and this is why i wrote that the approach with tables is too complicated.

In a programming language that is intended for working with databases, i definitely expect a Set datatype to be present and i expect my parameters can be defined as a specific Set, with all the checks and constraints be described as properties of this Set.

I also agree that XML is not the solution - it will only make things more complex for both the SProc author and the SProc user.
7/2/2007 1:38 PM | Horst Makitta

# re: Creating Array or Table Parameters for a Stored Procedure

Here comes the TEMP table defender:

Everything said above is correct. However, global temporary tables stay in scope forever. Temp tables can be indexed. And if you need to guarantee their existence, it's just a matter of having them autocreated when SQL Server starts, so other layers of your application don't have to create them.

Referential integrity is an issue - 100% agreed.

The only real benefit that I see in temp tables is their temporary nature. I personally don't like the idea of storing non-permanent data in a fixed table. With a big customer base (or user base) this uses too much space for something that isn't useful. However, if you need the data to linger, that's a whole different animal.

As for optimizations of query plans, I assume the temp table stores a few records per user. And still there are hints to be used, or you can simply force a stored proc to recompile when it executes (many would object to this, however it can solve problems when the amount of joined data becomes unpredictable). Don't forget that stored procs are only as good as the time they were created.

Again, the fixed table approach might work fine, if some conditions are met as well. It all depends on the application.

Just a few thoughts. Cool discussion guys!

Cheers.
7/2/2007 1:41 PM | Boris Janischevsky

# re: Creating Array or Table Parameters for a Stored Procedure

>>However, global temporary tables are there in scope for ever. Temp tables can be indexed. And if you need to guarantee their existance, it's just a matter of having them autocreated when SQL Server starts. So other layers of your application don't have to create them.

If you are using global temp tables, then you will have concurrency issues -- two people might be accessing the temp table at the same time, as I discussed in the blog post. Global temp tables definitely don't help you at all and just overcomplicate things, especially if you need to write code to ensure that they exist and are created when the server starts. I fail to see a single benefit of a temp table approach, especially a global temp table.
7/2/2007 2:26 PM | Jeff

# re: Creating Array or Table Parameters for a Stored Procedure

Horst --

>>If your point is that there is no guarantee from the POV of the writer of the Sproc that the table was created the way it should have been created - well there is no guarantee that the regular table wasn't modified in the meantime either.

I hope that when we write any SQL code we can assume that the permanent tables and relations and indexes in our database will remain constant! Otherwise, how can we write anything? If people are randomly altering tables in our schema, then we have bigger things to worry about !

>>I also don't agree that the stored proc needs to be recompiled every time the Sproc is called - once per session will be enough.

It needs to be recompiled each time, since the temp table is recreated each time it is called. Unless it is a global temp table, which will have concurrency issues. SQL Server has no way to know what the schema of the temp table is each time, since it can always be different and it is not permanently defined.

>> If you want more data checks, create a second temp table within the stored procedure with all the checks and indexes ... If this works, the data quality is correct.

that is definitely more complicated than what I recommend in the article. I see no benefit at all to that approach, and it also doesn't provide the benefit of validating parameter values as they are set. You are adding unnecessary complexity to the process just for the perceived benefit of using temp tables.

>>However i agree with Mladen here: It would be better if these checks and constraints could be expressed in the definition of the parameters of the stored procedure - and this is why i wrote that the approach with tables is too complicated. .... In a programming language that is intended for working with databases, i definitely expect a Set datatype to be present and i expect my parameters can be defined as a specific Set, with all the checks and constraints be described as properties of this Set.

Again, I am not sure why you feel this is "complicated". Give me some pseudo-code for your simplified world. Imagine that SQL supports the exact feature you could ever hope for -- how would this work? I already showed you how it would work with table variables -- it is no less complicated than the technique I demonstrate; they have exactly the same steps.

>> I also agree that XML is not the solution - it will only make things more complex for both the SProc author and the SProc user.

Agreed.

Thanks for the feedback, Horst!
7/2/2007 2:40 PM | Jeff

# re: Passing Array or Table Parameters to a Stored Procedure

Maybe I'm missing something here, but this parameter table doesn't make sense to me.
You still have to loop through the collection to insert values into the parameter table, so why not just do the update instead?

Isn't that more efficient?

8/5/2007 10:11 PM | Adam

# re: Passing Array or Table Parameters to a Stored Procedure

Adam --

The update is an example. The concept is writing a stored procedure that accepts a list or table of parameters, and ways to pass that list to the stored procedure when calling it.

i.e., If you have a client app that has a list of 600 customers to update, how should it pass those 600 customers to the database to do the update?

8/6/2007 8:28 AM | Jeff

# re: Passing Array or Table Parameters to a Stored Procedure

Jeff

Just thought I'd add an issue to the suggested parameter table.

When using the approach described here, if the contents of the parameter table change significantly between executions of the stored procedure, at some point the query optimizer will get the execution plan wrong for the stored procedure.
The result is that a query that should take 3 seconds can take up to 190 minutes.

The reason is that the statistics on the parameter table are stale.
While I'm still looking for a better solution, the parameter table statistics can be updated using “UPDATE STATISTICS parameterTableName PK_parameterTableName WITH FULLSCAN”

10/17/2007 9:01 PM | Matt

# re: Passing an Array or Table Parameter to a Stored Procedure

Hi,

I have a situation where I am updating multiple IDs in the database.

I have created a stored procedure for this using a table type, which accepts the input parameters to the stored procedure.

Is it possible to pass an ADO.NET typed DataTable as a single input parameter against the SQL Server table type?

Thanks in advance.

12/5/2007 5:09 AM | Munish

# re: Passing an Array or Table Parameter to a Stored Procedure

Hi,

I am new to SQL. Is it possible to pass a table name as a parameter inside the stored procedure? For example:

Create proc aaa @table_name varchar(100)
as
begin
select * from [@table_name]
end

It creates fine up to this point.

But while executing it (e.g., exec aaa 'table'), it throws an error.

How do I pass the table name as a parameter?

Thanks in advance !

-Vishnu







1/24/2008 7:20 AM | Vishnu

# re: Passing an Array or Table Parameter to a Stored Procedure

It is amazing how much discussion there is on the web, not just here, on how to do this incredibly basic but necessary operation. You cannot pass a table into a stored proc or a user defined function. You cannot pass in an array (there is no such thing in SQL). You have to hack some sort of gigantic kluge: make a string (CSV or XML) and parse it, or make an image and parse it, or write a string of SQL and dynamically EXEC it, create a separate table and fill it with parameters (as above described), etc. -- all just to emulate an array parameter! All of the solutions are kluges, involving problems with data integrity or type safety, security holes, performance hits, redundant code proliferation, etc. This is INSANE.

The above solution is as good as anything, but a design problem here is that for every sproc or udf that requires such an array param, you have to have an accompanying param table hanging around with no enforced connection to the code of the sproc. So, with every code change there is the danger of having the param table and the sproc out of sync. And what if you needed 2 or 3 arrays for an sproc? And suppose you had 25 sprocs that each needed 3 array params?? You'll have to make up some sort of naming conventions to help keep some order to the mess. And by then you will have a hard time remembering to fill the param table with the params for a given sproc, since those params are implicit and not in the list of actual sproc params.

If we are supposed to use tables or temporary tables in this sort of way, then tables ought to be allowed in SQL as a param data type directly. After all, they can be used this way indirectly, as the above solution shows. And you can have a table as an output type. What gives?

Sometimes SQL seems like a "toy" language, like MS-DOS batch scripting....

Vishnu, to answer your question: no, you cannot pass a table into an sproc or a udf. You could pass a string in for a table name, and then in the sproc you could build up a string for the SQL query that you want to do, and then run it using EXEC. That is not a safe thing to do, though. It is buggy, hard to maintain, hard to debug, and vulnerable to security issues.
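
For completeness, a rough sketch of that dynamic-SQL workaround (procedure name is illustrative); QUOTENAME() brackets the table name, which reduces -- but does not eliminate -- the injection risk mentioned above:

```sql
CREATE PROCEDURE SelectFromTable @table_name sysname
AS
BEGIN
    DECLARE @sql nvarchar(4000)
    -- Build the query text; QUOTENAME escapes the identifier
    SET @sql = N'SELECT * FROM ' + QUOTENAME(@table_name)
    EXEC (@sql)
END
GO

EXEC SelectFromTable 'Customers'
```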


2/26/2008 10:23 PM | Scobie

# re: Passing an Array or Table Parameter to a Stored Procedure

Very interesting discussion. One thought that comes to mind is to pass a .NET object (ADO table, array, collection, etc.) as a varbinary(MAX), then use SQLCLR to cast the varbinary(MAX) object back to its original datatype and either pipe it into the database through SQLCLR or have SQLCLR pass it back to the stored procedure as a table so that you can use T-SQL set operations on the data. The downside is that the invocation of SQLCLR takes a performance hit when it spins up, but that could offset the transformation cost of XML. It is another idea for solving this issue of impedance mismatch between SQL Server objects and .NET objects. This approach would only work with SQL Server 2005 and up.
4/28/2008 11:56 AM | Michael Rockwell

# re: Passing an Array or Table Parameter to a Stored Procedure (indirection)

Thank you for the article, and being a good sport about the complaints.

It’s easy to write code (in C# in our case) that iterates through a list of objects and calls an updating SP for each item. We are interested in bulk operations to reduce the time and network overhead of making many little calls instead of one big call. The above example has no effect on our problem; it just moves the work from updating the target table to updating some other table that is then used to update our target table. With the exception of a slightly smaller delta time between transactions, what is the advantage of this indirection? Why not update the target table and be done with it?

Do you think XML is a better choice for folks in our situation?

We already store quite a bit of XML data in SQL for our application. We have a huge variety of configuration data that fits XML better than native SQL values. So we probably need to come to terms with decoding XML in SPs for other reasons.
5/5/2008 10:32 AM | Ron Van Aken

# re: Passing an Array or Table Parameter to a Stored Procedure

I know this is not the right solution, but I have to say it for the sake of completeness of the discussion: if the number of items in your "array parameter" is limited (say, 5 or 10 items), you can always use optional parameters (i.e., assign them NULL defaults when declaring them in the SP), then insert them into a temp table or do whatever you want with them, without using dynamic SQL, XML, or string manipulation.
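
A rough sketch of that approach (procedure and column names are illustrative), with five optional IDs collected into a temp table:

```sql
CREATE PROCEDURE ProcessIds
    @id1 int = NULL, @id2 int = NULL, @id3 int = NULL,
    @id4 int = NULL, @id5 int = NULL
AS
BEGIN
    CREATE TABLE #ids (id int PRIMARY KEY)

    -- Gather whichever parameters were actually supplied
    INSERT INTO #ids (id)
    SELECT v.id
    FROM (SELECT @id1 UNION SELECT @id2 UNION SELECT @id3
          UNION SELECT @id4 UNION SELECT @id5) AS v(id)
    WHERE v.id IS NOT NULL

    -- ...now join #ids to the target table as usual
    SELECT id FROM #ids
END
GO

EXEC ProcessIds @id1 = 10, @id2 = 20
```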

7/1/2008 12:00 PM | Juan Sagasti

# re: Passing an Array or Table Parameter to a Stored Procedure

Create proc aaa @table_name sysname
as
begin
declare @sql nvarchar(4000)
set @sql = 'select * from ' + quotename(@table_name)
Exec (@sql)
end

exec aaa 'tablename'
9/18/2008 10:09 AM | kishore

# re: Passing an Array or Table Parameter to a Stored Procedure

So, what if I want to move items in a ListView in C# to an SQL stored procedure? How would you pass those parameters?
3/5/2009 1:24 AM | Student

# re: Passing an Array or Table Parameter to a Stored Procedure

Check out SQL Server 2008 Table-Valued Parameters

http://www.sqlteam.com/article/sql-server-2008-table-valued-parameters
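
A brief sketch of what the table-valued parameter approach in that article looks like (the type, procedure, and column names here are illustrative):

```sql
-- SQL Server 2008+ only: declare a reusable table type...
CREATE TYPE CustomerIdList AS TABLE (CustomerID int PRIMARY KEY)
GO

-- ...and accept it as a READONLY parameter
CREATE PROCEDURE UpdateSelectedCustomers @ids CustomerIdList READONLY
AS
    UPDATE c
    SET    c.LastReviewed = GETDATE()
    FROM   Customers c
    INNER JOIN @ids i ON i.CustomerID = c.CustomerID
GO

DECLARE @list CustomerIdList
INSERT INTO @list (CustomerID) VALUES (1), (2), (3)
EXEC UpdateSelectedCustomers @ids = @list
```

From ADO.NET, a DataTable can be passed directly to such a parameter using SqlDbType.Structured, which also addresses Munish's question above for SQL Server 2008 and later.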

3/19/2009 3:31 PM | gsdh

# re: Passing an Array or Table Parameter to a Stored Procedure

Thanks so much for this excellent, well-explained example. I ran into a need to pass a "table" of information to several child procedures, and this approach does a great job of handling the multi-user requirement.

Appreciate your effort to explain this, thanks again!
4/9/2010 9:18 AM | BigJim @ STL