You can use system variables throughout SQL Server 2005 Integration Services (SSIS) packages. System variables expose useful package-related data, but they do not provide much programming flexibility. Find out how you can define your own custom variables inside of SSIS packages, and how you can use these variables inside of your program flow.
This example will build upon the example in my article about system variables.
In this example, I will create a table named SalesImportBatch, which I will use to keep track of sales files that I want to import using SSIS. Importing the actual data is not necessary for this example, so I will skip it. (For more information on how to import files using SSIS, read my column, “How to import an Excel file into SQL Server 2005 using Integration Services.”) What is important for this article is reading the rows from the sales file; I will use that information to update the SalesImportBatch table.
First, I want to set up my batch table on my database server. To accomplish this, I execute the following script in SQL Server Management Studio.
CREATE TABLE SalesImportBatch
(BatchID INT IDENTITY(1,1) PRIMARY KEY,
 PackageAuditID INT, -- audit row created by the first Execute SQL task
 RecordCount INT) -- row count read from the sales file
Figure A is a screenshot of the Execute SQL Task that I used in my article regarding system variables. This Execute SQL task pulls package-related information from system variables and inserts it into a SQL Server table.
Figure B is essentially the same script I used in my previous article to insert system variables into my PackageSystemAudit table. At the end of the script, I am catching the identity value of the new record I insert. Later I will assign this value to a user-defined variable in my SSIS package. Right now, this script returns a single-row resultset.
Here is a listing of the above script that you can copy into your SSIS package.
INSERT INTO PackageSystemAudit
(PackageID) -- the original listing also inserts the other system variables shown in Figure B
VALUES (@PackageID)
SELECT CAST(SCOPE_IDENTITY() AS INT) AS PackageAuditID
If I right-click in the Control Flow screen of my SSIS package, I will see a Variables option from the menu. Figure C is a listing of the variables that are defined on my system. I have highlighted the user-defined variables that I have added and will use throughout this example.
Now I need to define how to assign this value from my ExecuteSQL statement to my user-defined variable (see Figure D).
I want to assign my one row result set to a parameter, so I will need to define it in the Result Set menu option. In Figure E, I assign my Result Name an arbitrary value of 0 and assign it to my PackageAuditID variable.
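In practice, the 0 is not entirely arbitrary: for a single-row result set, a Result Name of 0 binds to the first column of the returned row by position. In the script shown earlier, that first column is the aliased identity value, which is why the mapping lands in the PackageAuditID variable:

```sql
-- Result Name 0 binds to the first column of the single-row
-- result set; here that column is the aliased identity value,
-- so it flows into the User::PackageAuditID variable.
SELECT CAST(SCOPE_IDENTITY() AS INT) AS PackageAuditID
```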
Figure F is a shot of my completed Execute SQL Task.
There are two important settings that I have not mentioned: ConnectionType and ResultSet. I changed ConnectionType from OLEDB to ADO.NET because ADO.NET allows me to use named SQL parameters in the manner in which I am used to. Since I am returning the identity value from the newly inserted row in PackageSystemAudit, I need to set the ResultSet option to Single row. This allows me to assign that value to my PackageAuditID user-defined variable.
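The connection type matters because it changes the parameter markers the Execute SQL Task expects. A short illustration (the column list here is abbreviated for clarity):

```sql
-- With an OLE DB connection, parameters are positional "?" markers
-- mapped by ordinal in the Parameter Mapping page:
INSERT INTO PackageSystemAudit (PackageID) VALUES (?)

-- With an ADO.NET connection, parameters are named @variables,
-- just as in ordinary T-SQL:
INSERT INTO PackageSystemAudit (PackageID) VALUES (@PackageID)
```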
I want to check out how many records are in the sales file that I plan to import later. To do this, I will need to add a Data Flow task to my Control Flow menu (see Figure G). Data Flow tasks are used to extract and transform sets of data.
In the Data Flow screen, I added a Flat File Source task. I will use this task to point to the sales data file that I want to import to find out how many records are in the file (see Figure H). The red X at the right of the task indicates that the file connections are yet to be defined.
Here is a download of the file used in this example.
Figure I shows the properties I set for my Flat File Source task. All I need to do is set the properties on the General tab of the Properties menu. I specify the location of the file and how the file is delimited.
Once I have defined my text file I want to use for record counting, I need to add a transformation to see how many records are actually in the file (see Figure J). The Row Count task works great for this scenario. The Row Count task accepts a result set, calculates how many rows are in the result set, and lets you assign that count to a defined variable. To add this task, I drag the Row Count task onto my Data Flow screen.
To set the properties of the Row Count task, I double-click it (see Figure K). Once the Properties window is open, it is very easy to assign the row count to a variable. All I need to do is enter the variable name I want to use in the VariableName text box.
Now I want to insert into the SalesImportBatch table the user-defined variables that I just assigned values to. I will add a new Execute SQL task to my Control Flow screen and set the precedence to follow my Data Flow task (see Figure L). The properties that I will assign to this task are very similar to the Execute SQL task that I previously defined. I am still inserting variables into a database table, but I am not returning any data from my SQL statement to be used later. I am still using ADO.NET as my connection type, and I will need to map my user-defined variables to SQL parameters.
Below is a textual listing of the SQL statement I use to insert data into the SalesImportBatch table.
INSERT INTO SalesImportBatch
(PackageAuditID, RecordCount) -- columns match the user-defined variables mapped in Figure M
VALUES (@PackageAuditID, @RecordCount)
To map my user-defined variables to SQL parameters, I select the User:: variable I want from the drop-down list and assign a parameter name to map the variable to. Figure M is a screenshot of the completed mapping.
Figure N is the executed look of my completed SSIS package.
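Once the package has executed, a quick query can confirm that the batch row was written and tied back to its audit record. (The column names here follow the table definitions used earlier in this example; adjust them if your schema differs.)

```sql
-- Join the batch row back to its package audit record to verify
-- that the identity value and the row count were both captured.
SELECT b.BatchID, b.PackageAuditID, b.RecordCount
FROM SalesImportBatch AS b
JOIN PackageSystemAudit AS a
  ON a.PackageAuditID = b.PackageAuditID
```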
More to come
The example package doesn’t really do anything meaningful — it simply shows how you can assign data returned from SQL statements to user-defined variables and how you can use those variables in later SQL operations.
This is just the tip of the iceberg when it comes to the usefulness of user-defined variables. They can also be used in SQL scripts, to hold actual SQL statements, for splitting data, and even for SQL Server administration operations. In future articles, I will go into more detail about what you can do with SSIS packages.
Tim Chapman is a SQL Server database administrator and consultant who works for a bank in Louisville, KY. Tim has more than eight years of IT experience, and he is a Microsoft certified Database Developer and Administrator. If you would like to contact Tim, please e-mail him at firstname.lastname@example.org.