Creating a Numberline Table
This script creates a table with a single column named "i" and populates it with the sequential integers 0 through 99,999 (100,000 rows). You can adjust the upper bound to suit your needs.
I wrapped the script in a stored procedure so it can be saved inside the database.
This stored procedure and the [util].[numberline] table are created in a schema named [util]. For use with SQL 2000, simply replace [util] with [dbo].
Schemas are a new feature in SQL 2005. They are a boon to database management. We no longer have to lump all of our database objects into the single global "dbo" namespace. I put all of my "utility" SQL in a schema named [util].
You can grant permissions via the schema. Permissions granted to the schema apply to any object within that schema. Below I grant the 'public' role 'select' and 'exec' permissions to any object in the [util] schema. 'public' is a special role to which every database user belongs. The 'public' role is similar to the 'Everyone' group in Windows.
create schema [util]
go
grant select, exec on schema::[util] to public
The script below inserts 100,000 rows in under 1 second on my laptop. There are two TSQL features in the stored procedure that significantly affect performance.
- set nocount on
- begin tran .... commit
The numberline table is populated with a simple loop: "while @i < 100000". This means that 100,000 independent single-row "insert" statements are executed. If "set nocount" were left "off" (the default), you would see 100,000 "(1 row(s) affected)" messages in SQL Management Studio. Generating that output (and displaying it on your screen) can take longer than inserting into the util.numberline table.
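You can see the effect of "set nocount" in isolation with a throwaway table variable (the variable name here is just for illustration):

```sql
declare @t table (i int)

set nocount off                -- the default: every statement reports its row count
insert into @t values (1)      -- client receives a "(1 row(s) affected)" message

set nocount on                 -- row-count messages suppressed
insert into @t values (2)      -- the insert still happens; no message is sent
```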
Combining the 100,000 inserts into a single transaction has an even more dramatic impact on performance. In the absence of an explicit "begin tran"..."commit", TSQL executes each individual "insert" in its own implicit transaction. That means 100,000 transactions! Every TSQL transaction, even a simple one, must be written to the hard disk before it completes. The disk write ensures that the transaction is "Durable", the "D" in ACID.
Without an explicit "begin tran"..."commit" it took over 44 seconds to insert the 100,000 rows on my laptop (versus under 1 second with the transaction). This is because SQL Server had to make 100,000 writes to the log disk. When the same operation is wrapped in a single transaction, a single disk write carries all 100,000 rows.
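You can reproduce the timing comparison yourself by bracketing the call with getdate(). A minimal sketch, assuming the stored procedure below has already been created:

```sql
declare @start datetime
set @start = getdate()

exec util.create_numberline_table

select datediff(ms, @start, getdate()) as elapsed_ms
```

To measure the slow path, temporarily remove the "begin tran"/"commit" pair from the procedure and run the same timing script again.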
create proc util.create_numberline_table
as
-- suppress the per-insert "(1 row(s) affected)" messages
set nocount on

if object_id('util.numberline') is not null
    drop table util.numberline

create table util.numberline(
    i int not null
    ,constraint pk__util_numberline primary key clustered(i)
)

declare @i int
set @i = 0

-- one explicit transaction: one log flush instead of 100,000
begin tran
while (@i < 100000)
begin
    insert into util.numberline values(@i)
    set @i = @i + 1
end
commit
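Once populated, the numberline table lets you replace loops with set-based queries. A sketch of one common use, generating a run of calendar dates (the start date and row limit here are arbitrary):

```sql
exec util.create_numberline_table

-- generate the first 30 days of 2005 without a loop
select dateadd(day, n.i, '20050101') as calendar_date
from util.numberline n
where n.i < 30
order by n.i
```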