I have a job that runs every evening that calls two stored
procedures. The first procedure inserts new records from a
source database.table into a destination database.table;
the second procedure updates the records that were changed
at the source and applies those changes at the destination.
This job has run successfully without problems for 2+
years. Now, for the past two days, the second step is
blowing up the transaction log to more than 7GB (its
limit). Any idea what could cause this new behaviour?
Any suggestions as to what I need to look at would be
most appreciated.
Thanks,
Marc
Has the recovery model of the destination database been changed recently?
Are the backup jobs running successfully?
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
Is .NET important for a database professional?
http://vyaskn.tripod.com/poll.htm
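Both questions can be checked quickly from Query Analyzer. This is a sketch, assuming the destination database is named `DestDB` (replace with the real name) and that you can read the `msdb` backup history tables:

```sql
-- 1. Current recovery model of the destination database:
SELECT DATABASEPROPERTYEX('DestDB', 'Recovery') AS recovery_model;

-- 2. Recent backup history for that database, newest first.
--    type 'D' = full, 'L' = log, 'I' = differential:
SELECT TOP 10 database_name, type, backup_start_date, backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = 'DestDB'
ORDER BY backup_start_date DESC;
```

A gap in the log-backup rows, or a recovery model that recently flipped from SIMPLE to FULL, would line up with the sudden log growth.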
|||Could be that the recovery model was changed from simple
to full, but I'm not 100% sure, as this isn't my server.
The backups that are scheduled after this job runs fail
because there is no longer adequate disk space.
|||Hi,
Execute the command below to get the recovery model:
select databasepropertyex('dbname','recovery')
If the recovery model is FULL or BULK_LOGGED, you need to perform
transaction log backups in between your full backup jobs.
For now, to reduce the transaction log file size, back up the
transaction log and then shrink the file.
Steps to shrink the log file:
1. Back up the transaction log (use the command BACKUP LOG dbname TO
DISK = 'c:\backup\dbname.tr1', or use Enterprise Manager). If you do
not need the transaction log backup, execute the command below instead:
backup log <dbname> with truncate_only
2. Identify the log file for shrinking:
use <dbname>
go
sp_helpfile
Based on the name column for the transaction log file, execute DBCC
SHRINKFILE.
3. Shrink the transaction log file:
DBCC SHRINKFILE ('logical_transaction_log_file_name', TRUNCATEONLY)
Have a look at the article below:
INF: Shrinking the Transaction Log in SQL Server 2000 with DBCC SHRINKFILE
http://support.microsoft.com/default.aspx?scid=kb;en-us;272318
Thanks
Hari
MCDBA
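The three steps above can be combined into a single script. This is a sketch, assuming a database named `MyDB` whose log file has the logical name `MyDB_Log` (check the `name` column of `sp_helpfile` output for the real name):

```sql
USE MyDB
GO
-- 1. Back up the transaction log to disk:
BACKUP LOG MyDB TO DISK = 'c:\backup\MyDB.tr1'
GO
-- 2. Confirm the logical name of the log file (name column):
EXEC sp_helpfile
GO
-- 3. Shrink the log file, releasing unused space back to the OS:
DBCC SHRINKFILE ('MyDB_Log', TRUNCATEONLY)
GO
```

Note that shrinking only reclaims disk space; if whatever is generating the log volume every night isn't addressed, the file will simply grow again.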
|||Thanks for the info Hari, but I've done all that.
There are transaction log backups every two hours.
What I'm trying to resolve is the offending job that is
inexplicably filling up the log to capacity every
evening and then subsequently failing out.
Thanks,
Marc
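To narrow down what the job is doing to the log, a couple of diagnostic commands can be run while (or just after) the nightly job executes. A sketch, where `DestDB` is a hypothetical name for the destination database:

```sql
-- Log size and percent used for every database on the server;
-- run before and after the job to see how much log the step generates:
DBCC SQLPERF(LOGSPACE);

-- Oldest active transaction in the destination database. A long-running
-- open transaction prevents log truncation and forces the log to grow
-- even when log backups are running on schedule:
USE DestDB
DBCC OPENTRAN;
```

If the update step really is touching far more rows than before (for example, a change at the source now flags every row as modified), batching the UPDATE so that log backups can truncate the log between batches is one common mitigation.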