Hi,
I have a few thousand distinct (col1, col2) values: col1 is a primary key and col2 is a date.
For a third column, col3, I have to query a large table, which gives me only a few hundred results in most cases.
My concern is how to write my WHERE condition, or use unions, so that the number of times I query the DB decreases; my program is slow because of this.
Currently I am doing something like the following in my Perl program using DBI.
select COL3 from Table
where ( (COL1='v1' and COL2='Sep 25 2007 12:00AM')
     or (COL1='b3' and COL2='Sep 28 2007 12:00AM')
     or (COL1='c1' and COL2='Sep 11 2007 12:00AM') )
  and COL3='ABCD'
union
select COL3 from Table
where ( (COL1='v2' and COL2='Sep 28 2007 12:00AM')
     or (COL1='b2' and COL2='Oct 1 2007 12:00AM')
     or (COL1='c2' and COL2='Sep 28 2007 12:00AM') )
  and COL3='ABCD'
union
select COL3 from Table
where ( (COL1='v3' and COL2='Oct 1 2007 12:00AM')
     or (COL1='b1' and COL2='Sep 28 2007 12:00AM')
     or (COL1='c3' and COL2='Sep 24 2007 12:00AM') )
  and COL3='ABCD'
-
For now I have grouped things in batches of 25 (removing the union) and I got a good improvement in my program. Thanks, but if there is a better option, I am interested.
From awake416 -
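For reference, a minimal sketch of what that batching can look like with DBI placeholders. This assumes a connected handle in $dbh and reuses the thread's table and column names; the DSN, credentials, the contents of @pairs, and the batch size of 25 are only placeholders, not anything from the original program.

use strict;
use warnings;
use DBI;

# Sketch only: DSN, credentials, and the contents of @pairs are placeholders.
my $dbh = DBI->connect('dbi:Sybase:server=MYSERVER', 'user', 'password',
                       { RaiseError => 1 });

my @pairs = (
    ['v1', 'Sep 25 2007 12:00AM'],
    ['b3', 'Sep 28 2007 12:00AM'],
    # ... a few thousand more ...
);

my $batch_size = 25;
my @col3_values;

while (my @chunk = splice(@pairs, 0, $batch_size)) {
    # Build "(COL1=? and COL2=?) or (COL1=? and COL2=?) or ..." for this chunk.
    my $n         = scalar @chunk;
    my $pairs_sql = join ' or ', ('(COL1=? and COL2=?)') x $n;

    my $sth = $dbh->prepare(
        "select COL3 from Table where ($pairs_sql) and COL3='ABCD'"
    );
    # Flatten the [col1, col2] pairs into the bind values.
    $sth->execute(map { @$_ } @chunk);

    while (my ($col3) = $sth->fetchrow_array) {
        push @col3_values, $col3;
    }
}

One query is sent per batch of 25 pairs instead of one per pair, which is essentially what the grouping above does.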
A way of doing this is to create a temporary table to hold all the col1 and col2 values you have. Insert the col1 and col2 values into the temp table and then do a join between the temp table and Table. (I am just doing the first part of each OR, as the bit with COL3='ABCD' will be similar.)
Parts of the code would be
create table #t ( COL1 char(2) not null, COL2 datetime not null )

do the inserts
then
select col3 from Table inner join #t t on t.COL1 = Table.COL1 and t.COL2 = Table.COL2

awake416: Yeah, that is a good idea. Let me see if I can get some performance improvement. But would it not create some extra burden to create a table with a few thousand rows? Again, thanks for your reply.

Mark: I would suspect it would take less resources on the server, as you are doing fewer selects and you are sending similar amounts of data to the server - either in code or, in my solution, as data.

From Mark
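Putting Mark's suggestion together end to end, a rough DBI sketch might look like the following. The DSN, credentials, and @pairs are placeholders; "Table" stands in for the real table name as it does in the thread; and the #t syntax assumes Sybase/SQL Server style temp tables, which the #t in the answer suggests. The COL3='ABCD' filter that the answer left out is added back in here.

use strict;
use warnings;
use DBI;

# Sketch only: DSN, credentials, and @pairs are placeholders.
my $dbh = DBI->connect('dbi:Sybase:server=MYSERVER', 'user', 'password',
                       { RaiseError => 1 });

my @pairs = (
    ['v1', 'Sep 25 2007 12:00AM'],
    ['b3', 'Sep 28 2007 12:00AM'],
    # ... a few thousand more ...
);

# 1. Temp table to hold the (COL1, COL2) pairs.
$dbh->do(q{create table #t ( COL1 char(2) not null, COL2 datetime not null )});

# 2. Insert the pairs (one prepared insert, executed once per pair).
my $ins = $dbh->prepare(q{insert into #t (COL1, COL2) values (?, ?)});
$ins->execute(@$_) for @pairs;

# 3. A single join against the temp table replaces all the unions.
my $sth = $dbh->prepare(q{
    select Table.COL3
    from Table
    inner join #t t on t.COL1 = Table.COL1 and t.COL2 = Table.COL2
    where Table.COL3 = 'ABCD'
});
$sth->execute;
while (my ($col3) = $sth->fetchrow_array) {
    # ... use $col3 ...
}

$dbh->do(q{drop table #t});

The per-pair inserts are cheap single-row statements, and the server then does one set-based join, which is generally where the time saving over thousands of separate selects comes from.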