Sybase problems 3

1. I have written a query, which you can see below. The query takes approximately 50-60 seconds to run, which I believe is far too long. The total number of rows returned is 10,000.
Any ideas for improving the query timing?

The number of rows in each table is as follows:
ACCOUNTS : 1,580
PERSONNEL : 1,616
DIVISION : 15
LABOR_COMMITTED : 23,000
FISCAL_SYSTEM_PERIODS : 20
FISCAL_YEARS : 20

INDEXES :
ACCOUNTS TABLE - 1. ACCOUNT_ID (Primary Key), 2. IN_AC_COMBINED (ACCOUNT_ID, ACCOUNT_MANAGER)
LABOR_COMMITTED TABLE - 1. LABOR_COMMIT_ID (Primary Key), 2. IN_LC_COMBINED (PERSONNEL_ID, ACCOUNT_ID)
PERSONNEL TABLE - 1. PERSONNEL_ID (Primary Key)

The Actual Query :
SELECT AC.ACCOUNT_NUMBER,
       AC.ACCOUNT_NAME,
       AC.ACCOUNT_END_DATE,
       P1.NAME,
       D.DIVISION_NUMBER,
       P.SURNAME_ISI || ' ' || P.FIRST_NAME Person_Name,
       ISNULL(P.EMPLOYEE_ID, P.VISITOR_NUMBER) Employee_Number,
       CONVERT(DATETIME, FSP.FISCAL_SYSTEM_PERIOD_CAL_YR_EQ || ' ' || FY.FISCAL_YEAR) Month_Year,
       FSP.FISCAL_SYSTEM_PERIOD_CAL_YR_EQ || ' ' || FY.FISCAL_YEAR 'Month/Year',
       ROUND(LC.LCS_EFFORT, 2) LCS_Effort,
       'ACTUAL' AS 'Labor Type',
       (SELECT CODE_VALUE FROM CODES CD WHERE CODE_ID = P.STAFF_CATEGORY_CODE_ID) STAFF_CAT
FROM   ACCOUNTS AC (INDEX IN_AC_COMBINED),
       PERSONNEL P1,
       DIVISIONS D,
       LABOR_COMMITED LC (INDEX IN_LC_COMBINED),
       PERSONNEL P,
       FISCAL_SYSTEM_PERIODS FSP,
       FISCAL_YEARS FY
WHERE  P1.PERSONNEL_ID IN (SELECT DISTINCT CHILD.PERSONNEL_ID
                           FROM PERSONNEL CHILD, PERSONNEL PARENT
                           WHERE PARENT.PERSONNEL_ID IN (7167, 7041)
                             AND CHILD.LFT BETWEEN PARENT.LFT AND PARENT.RGT
                             AND PARENT.LFT <> 0
                             AND PARENT.RGT <> 0)
  AND  AC.ACCOUNT_MANAGER = P1.PERSONNEL_ID
  AND  P1.DIVISION_ID = D.DIVISION_ID
  AND  AC.ACCOUNT_ID = LC.ACCOUNT_ID
  AND  LC.PERSONNEL_ID = P.PERSONNEL_ID
  AND  FSP.FISCAL_SYSTEM_PERIOD_ID = LC.FISCAL_PERIOD_ID
  AND  FY.FISCAL_YEAR_ID = LC.FISCAL_YEAR_ID
  AND  LC.COMMITED_DATE IS NOT NULL
  AND  CONVERT(DATETIME, FISCAL_SYSTEM_PERIOD_CAL_YR_EQ || ' ' || FISCAL_YEAR) <= CONVERT(DATETIME, 'January, 2007')
  AND  CONVERT(DATETIME, FISCAL_SYSTEM_PERIOD_CAL_YR_EQ || ' ' || FISCAL_YEAR) >= CONVERT(DATETIME, 'February, 2006')
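For reference, one thing that often helps with a query like this is to resolve the two small fiscal tables and the date-range CONVERTs into a temp table first, so the concatenation and conversion are evaluated once per period/year pair (at most 400 rows here) instead of being re-evaluated against every committed-labor row. A minimal sketch, assuming the column names shown in the query above (the #PERIODS temp table and this restructuring are an illustrative suggestion, not the poster's code):

-- Materialise the qualifying period/year pairs once
SELECT FSP.FISCAL_SYSTEM_PERIOD_ID,
       FY.FISCAL_YEAR_ID,
       CONVERT(DATETIME, FSP.FISCAL_SYSTEM_PERIOD_CAL_YR_EQ || ' ' || FY.FISCAL_YEAR) AS MONTH_YEAR
INTO   #PERIODS
FROM   FISCAL_SYSTEM_PERIODS FSP, FISCAL_YEARS FY
WHERE  CONVERT(DATETIME, FSP.FISCAL_SYSTEM_PERIOD_CAL_YR_EQ || ' ' || FY.FISCAL_YEAR)
       BETWEEN CONVERT(DATETIME, 'February, 2006') AND CONVERT(DATETIME, 'January, 2007')

-- In the main query, drop FSP/FY and the two CONVERT predicates and join
-- the labor table to #PERIODS (aliased PR) on both keys instead:
--   , #PERIODS PR
--   ...
--   AND LC.FISCAL_PERIOD_ID = PR.FISCAL_SYSTEM_PERIOD_ID
--   AND LC.FISCAL_YEAR_ID = PR.FISCAL_YEAR_ID

With tables this small it is also worth checking whether the forced index hints are actually helping; letting the optimizer choose after running update statistics is often faster.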

----------------------------------------------------------------------------------------------------------------------------
2. I have a table with 10 million rows. About 9.5 million rows, identified by certain criteria, must be deleted from this table.
If I use a DELETE statement for this, it takes nearly 15-20 minutes to execute, so how can I minimize the execution time?
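For reference, two approaches are commonly suggested here: when almost the whole table goes, SELECT the roughly 0.5 million surviving rows into a work table, truncate the original (truncate is minimally logged), and copy them back; otherwise, delete in batches so no single transaction has to log 9.5 million rows. A minimal sketch of the batched delete, assuming a hypothetical table big_table and filter column status (both names are illustrative only):

-- Delete in 50,000-row chunks so each transaction, and the log space
-- it needs, stays small; adjust the chunk size to fit your log.
set rowcount 50000
declare @rows int
select @rows = 1
while @rows > 0
begin
    delete from big_table where status = 'OBSOLETE'
    select @rows = @@rowcount
end
set rowcount 0

Dumping the transaction log between chunks (or running the loop in a maintenance window with truncate log on checkpoint) keeps the log from filling while the loop runs.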

------------------------------------------------------------------------------------------------------------------------------

3. I was working on a stored procedure for a pending QC. I need to negate (exclude) the bskSymbol values which have multiple consSymbol values.

For that I am using the query given below and saving the result in a #temp table:

select bskSymbol from RSK_IM_ModifyBasket group by bskSymbol having count(*) > 1 and BatchId = 0

or like this:

select into ... #temporary table
....
bskSymbol not in (select bskSymbol
from RSK_IM_ModifyBasket b
group by bskSymbol
having count(*) > 1 and BatchId = 0
)

Neither version produces any one-to-one mappings of bskSymbol and consSymbol,

whereas the query given below shows some bskSymbol values with a one-to-one mapping of bskSymbol and consSymbol (by one-to-one mapping I mean that for some bskSymbol the count value is 1):

select bskSymbol,count(consSymbol) from RSK_IM_ModifyBasket where BatchId = 0 group by bskSymbol

Can you suggest where I am going wrong?
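For reference, the difference between the result sets most likely comes from where BatchId = 0 is applied: in the first query it sits in the HAVING clause, so the grouping (and the count) is computed over all batches before the batch filter takes effect, whereas the last query filters on BatchId in the WHERE clause before grouping. A minimal sketch with the batch filter moved into WHERE (a suggested rewrite, not the poster's code):

-- Restrict to BatchId = 0 first, then keep only the bskSymbol values
-- that still have more than one consSymbol row within that batch.
select bskSymbol
from   RSK_IM_ModifyBasket
where  BatchId = 0
group by bskSymbol
having count(*) > 1

The same where BatchId = 0 would then also go inside the NOT IN subquery of the select into version.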

---------------------------------------------------------------------------------------------------------------------------------------------------------
Hi friends,
I am facing a problem with a bcp process in the following environment:
OS - Sun Solaris
Database - Sybase

I have 13 tables in my database, with a maximum of 8,450,782 rows in a table.
So I have written a Unix script in which the whole bcp process is done in one shot as a single background process (bcp out from the source table, truncate the destination table, bcp in). In this way all 13 of my tables refresh simultaneously. My tables do not have any indexes or other referential dependencies.
Sometimes this whole process slows down significantly, so is there any issue with refreshing multiple tables at the same time?
Is my approach wrong, or what do I need to do to speed up the process?
Any suggestions will be appreciated.

sol:

I think you might be limited by your disk I/O speeds. But things you can try to speed up the process are a FIFO special file (man mkfifo) as the bcp file and a larger network packet size (bcp -A 8704).
Would it be worth altering your Unix script to do one table at a time? It might avoid the competition for resources. It would be good to time each stage so you can see where the issue is.

Do you drop the indexes on the destination tables before bcp'ing in, then create them again afterwards, in the hope of using fast bcp?

Another option might be to try putting the 13 tables into their own database and simply dump the database at the source and restore it on the destination, rather than bcp'ing each table out and in.
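For reference, a minimal sketch of that dump-and-load approach, assuming the 13 tables have been moved into a hypothetical database called staging_db and that the dump file path is illustrative only:

-- On the source server
dump database staging_db to '/dumps/staging_db.dmp'

-- On the destination server
load database staging_db from '/dumps/staging_db.dmp'
go
online database staging_db

A single dump/load replaces 13 separate bcp out, truncate, and bcp in passes, at the cost of refreshing the whole database as one unit.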

-----------------------------------------------------------------------------------------------------------------------------------------------------------------------

Does anyone know of an easy way to merge Sybase users from two dataservers into one? The syslogins table is different on the two servers.
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
