I just had a situation where I was creating a temp table in a stored proc. I
create the temp table and then select data into it from a couple of other
tables. The problem is, we had changed the size of one of the fields in the
source table from varchar(250) to varchar(500), but the temp table didn't
reflect that, so data was being truncated.
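Roughly what the proc is doing, with hypothetical table and column names - the temp table still declares the old width while the source column is now varchar(500):

CREATE TABLE #Results
(
    CustomerId int          NOT NULL,
    Notes      varchar(250) NOT NULL  -- stale width; dbo.Customers.Notes is now varchar(500)
);

INSERT INTO #Results (CustomerId, Notes)
SELECT c.CustomerId, c.Notes          -- anything longer than 250 characters no longer fits here
FROM dbo.Customers AS c;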
So the easiest way around this would be to make the field varchar(max)
instead of 500 so if and when that field grows later, I don't get an error
in the stored proc.
Question is - does using varchar(max) have any repercussions? Could I just
use it everywhere and then not have to worry about this problem?
TIA - Jeff.
> So the easiest way around this would be to make the field varchar(max)
> instead of 500 so if and when that field grows later, I don't get an error
> in the stored proc.
Or maybe you can avoid using a temp table for this data?
Or maybe if you change the data type of the base column (which shouldn't
happen very often) you also change the other places it is referenced (e.g.
SP params, variable declarations, etc.)?
> Question is - does using varchar(max) have any repercussions? Could I just
> use it everywhere and then not have to worry about this problem?
This is like trading in a sub-compact for a minivan because you don't like
the way the tennis racket fits on the seat.
I would strongly recommend only using MAX when you absolutely need to store
more than 8,000 characters (or 4,000 for nvarchar). In this case, if you
changed from 250 to 500 now, you will probably change it again. I would say
envision the largest number of characters that column will ever need to hold,
then double it, and fix it everywhere once. That drastically reduces the
likelihood you will have to worry about it again.
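To illustrate (hypothetical table and column names), "fix it everywhere once" would look something like this, widening the base column well past what is needed today:

ALTER TABLE dbo.Customers
    ALTER COLUMN Notes varchar(1000) NOT NULL;   -- 500 is needed today, so double it

...and at the same time changing every declaration that mirrors it (temp table columns, stored procedure parameters, local variable declarations) to the same varchar(1000).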
|||> As an alternative, why not create your temp table from the underlying
> source columns - that way your temp table columns will always match
> the source columns.
That's a possible solution, and I don't know the OP's requirements, but I
often opt for CREATE TABLE and then INSERT INTO, in case I need to have
additional columns, or in case I want to define indexes/keys/constraints
etc. BEFORE all of the data is in the destination table...
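A rough sketch of that approach (hypothetical table and column names) - the shape, key, and index are all in place before any data arrives:

CREATE TABLE #Results
(
    CustomerId int          NOT NULL PRIMARY KEY,
    Notes      varchar(500) NOT NULL,
    LoadBatch  tinyint      NOT NULL   -- an extra column the source tables don't have
);

CREATE INDEX IX_Results_LoadBatch ON #Results (LoadBatch);

INSERT INTO #Results (CustomerId, Notes, LoadBatch)
SELECT c.CustomerId, c.Notes, 1
FROM dbo.Customers AS c;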
A
|||I would create the table that way, but I'm actually doing a couple of
different selects to put data in the table, so I need to do INSERT INTOs.
<jhofmeyr@.googlemail.com> wrote in message
news:861f7e9e-1970-4538-99c8-e599c1847a0b@.f10g2000hsf.googlegroups.com...
> Hi Jeff,
> I would avoid using varchar(max) unless you actually need to store
> very long values (> 8000 chars)
> As an alternative, why not create your temp table from the underlying
> source columns - that way your temp table columns will always match
> the source columns.
> You can do this very easily by creating the table like this:
> SELECT <col list>
> INTO #tmp_table
> FROM <table list>
> WHERE 1 = 0
> Good luck!
> J
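For example, with hypothetical table names, the quoted trick would look like this - #tmp_table inherits the source columns' names, types, and lengths, but WHERE 1 = 0 means no rows are copied:

SELECT c.CustomerId, c.Notes, o.OrderDate
INTO #tmp_table
FROM dbo.Customers AS c
JOIN dbo.Orders AS o ON o.CustomerId = c.CustomerId
WHERE 1 = 0;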
|||>I would create the table that way, but I'm actually doing a couple of
>different selects to put data in the table, so I need to do INSERT INTOs.
But the very first one could be a SELECT INTO, no? I assume the source table
that drives that column has the same data type as the columns in the other
source tables that fill it, so all of those tables should be updated together
if you increase the size again...
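Something like this (hypothetical names) - the first query creates the temp table with whatever the source columns currently are, and the later queries just append:

-- first result set creates the temp table with the source's current column definitions
SELECT c.CustomerId, c.Notes
INTO #Results
FROM dbo.Customers AS c;

-- later result sets are appended with plain INSERT INTO
INSERT INTO #Results (CustomerId, Notes)
SELECT p.CustomerId, p.Notes
FROM dbo.Prospects AS p;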
A