We are on SQL 2005.
We have several jobs that are started by triggers on inserts. The
job in question has a T-SQL step.
When the user who creates a record is a member of the sysadmin role, everything
is fine. When the user is not a member of that role, we get:
The specified @job_name ('the job name') does not exist.
How can a user who is not in the sysadmin role run this job?
Thanks,
Doug
|||This cannot be done directly. In SQL 2005 you could have the trigger post a
message to a Service Broker queue and have a sysadmin-level process
assigned to processing the queue messages, thus using the queue to decouple
permissions. This is one of Service Broker's intended purposes.
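A minimal sketch of the sending side of that pattern (every object name here is hypothetical, and the setup is simplified to a single service talking to itself):

```sql
-- One-time setup: message type, contract, queue, and service (names are placeholders).
CREATE MESSAGE TYPE StartJobRequest VALIDATION = NONE;
CREATE CONTRACT StartJobContract (StartJobRequest SENT BY INITIATOR);
CREATE QUEUE StartJobQueue;
CREATE SERVICE StartJobService ON QUEUE StartJobQueue (StartJobContract);
GO
-- Inside the trigger, instead of calling msdb.dbo.sp_start_job directly,
-- post a message; no Agent permissions are needed to SEND.
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE StartJobService
    TO SERVICE 'StartJobService'
    ON CONTRACT StartJobContract
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h
    MESSAGE TYPE StartJobRequest (N'the job name');
```

The trigger then commits immediately; whatever privileged process drains the queue does the Agent work asynchronously.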
Thank you,
Daniel Jameson
SQL Server DBA
Children's Oncology Group
www.childrensoncologygroup.org
|||Daniel Jameson (djameson@childrensoncologygroup.org) writes:
> This cannot be done directly. In SQL 2005 you could have the trigger
> post a message to a Service Broker queue, and have a sysadmin level
> process assigned to processing queue messages, thus using the queue to
> decouple permissions, one of Service Broker's intended purposes.
Yes, Service Broker is most certainly the way to go. Most likely the
Agent job would not be needed at all then; what needs to be done can
be performed by the activation procedure for the queue.
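The receiving side of the pattern could look roughly like this (again, all names are hypothetical; a sketch, not a hardened implementation):

```sql
-- Hypothetical activation procedure. It runs under a privileged context
-- (EXECUTE AS OWNER), so the inserting user needs no rights in msdb.
CREATE PROCEDURE dbo.ProcessStartJobQueue
WITH EXECUTE AS OWNER  -- owner must be able to start the job
AS
BEGIN
    DECLARE @h UNIQUEIDENTIFIER, @body NVARCHAR(256);
    RECEIVE TOP (1)
        @h    = conversation_handle,
        @body = CAST(message_body AS NVARCHAR(256))
    FROM StartJobQueue;
    IF @body IS NOT NULL
        EXEC msdb.dbo.sp_start_job @job_name = @body;
    END CONVERSATION @h;
END;
GO
ALTER QUEUE StartJobQueue
    WITH ACTIVATION (STATUS = ON,
                     PROCEDURE_NAME = dbo.ProcessStartJobQueue,
                     MAX_QUEUE_READERS = 1,
                     EXECUTE AS OWNER);
```

As noted above, if the job's T-SQL step is simple enough, its body could live directly in this procedure and the Agent job could be dropped entirely.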
Erland Sommarskog, SQL Server MVP, esquel@sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
Monday, March 12, 2012
Job failure
I have a job that runs every evening that calls 2 stored
procedures. The first sp inserts new records from a
source database.table into a destination database.table;
the second sp updates the records that were changed at
the source and updates them at the destination.
This job has run successfully without problems for 2+
years. Now for the past two days the second step is
blowing up the transaction log to more than 7GB (its
limit). Any idea what could cause this new behaviour?
Any suggestions as to what I need to look at would be
most appreciated.
Thanks,
Marc
Has the recovery model of the destination database changed recently? Are the
backup jobs running successfully?
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
Is .NET important for a database professional?
http://vyaskn.tripod.com/poll.htm
|||Could be that the recovery model was changed from simple
to full, but I'm not 100% sure as this isn't my server.
The backups that are scheduled after this job runs fail
as there is no longer adequate disk space.
|||Hi,
Execute the command below to get the recovery model:
select databasepropertyex('dbname','recovery')
If the recovery model is FULL or BULK_LOGGED, you need to perform
transaction log backups between runs of your backup job.
For now, to reduce the transaction log file size you have to back up the
transaction log and then shrink the file.
Steps to shrink the log file:
1. Back up the transaction log (use the command BACKUP LOG dbname TO
DISK='c:\backup\dbname.tr1', or use Enterprise Manager; or, if you do not
need the transaction log backup, execute the command below instead):
backup log <dbname> with truncate_only
2. Identify the log file for shrinking:
use <dbname>
go
sp_helpfile
Based on the name column for the transaction log file, execute DBCC
SHRINKFILE.
3. Shrink the transaction log file:
DBCC SHRINKFILE('logical_transaction_log_file_name', TRUNCATEONLY)
Have a look at the article below:
INF: Shrinking the Transaction Log in SQL Server 2000 with DBCC SHRINKFILE
http://support.microsoft.com/default.aspx?scid=kb;en-us;272318
Thanks
Hari
MCDBA
|||Thanks for the info Hari, but I've done all that.
There are transaction log backups every two hours.
What I'm trying to resolve is the offending job that
is inexplicably filling the log to capacity every
evening and then subsequently failing.
Thanks,
Marc
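To narrow down what the second step is suddenly logging, a couple of era-appropriate diagnostics could be run while the job executes ('dbname' is a placeholder):

```sql
-- Log size and percent used for every database on the instance.
DBCC SQLPERF(LOGSPACE);
-- Oldest active transaction in the database; a long-running open
-- transaction in the second sp would keep the log from truncating.
DBCC OPENTRAN('dbname');
```

It would also be worth checking the rowcount the second sp's UPDATE now affects: if a change at the source (or a changed join condition) suddenly makes it touch far more rows than before, the log growth follows directly, even with two-hourly log backups.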
procedures. The first sp inserts new records from a
source database.table into a destination database.table
the second job updates the records that were changed at
the source and updates them at the destination.
This job has run successfully without problems for 2+
years. Now for the past two days the second step is
blowing up the transaction log to more than 7GB (it's
limit). Any idea what could cause this new behaviour?
Any suggestions as to what I need to look at would be
most appreciated.
Thanks,
Marc
Is the recovery model of the destination database changed recently? Are the
backup jobs running successfully?
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
Is .NET important for a database professional?
http://vyaskn.tripod.com/poll.htm
"Marc" <anonymous@.discussions.microsoft.com> wrote in message
news:244ad01c45f4f$3aef6880$a501280a@.phx.gbl...
I have a job that runs every evening that calls 2 stored
procedures. The first sp inserts new records from a
source database.table into a destination database.table
the second job updates the records that were changed at
the source and updates them at the destination.
This job has run successfully without problems for 2+
years. Now for the past two days the second step is
blowing up the transaction log to more than 7GB (it's
limit). Any idea what could cause this new behaviour?
Any suggestions as to what I need to look at would be
most appreciated.
Thanks,
Marc
|||Could be that the recovery model was changed from simple
to full, but I'm not 100% as this isn't my server.
The backups that are scheduled after this job runs fail
as there is no longer adequate disk space.
>--Original Message--
>Is the recovery model of the destination database
changed recently? Are the
>backup jobs running successfully?
>--
>HTH,
>Vyas, MVP (SQL Server)
>http://vyaskn.tripod.com/
>Is .NET important for a database professional?
>http://vyaskn.tripod.com/poll.htm
>
>"Marc" <anonymous@.discussions.microsoft.com> wrote in
message
>news:244ad01c45f4f$3aef6880$a501280a@.phx.gbl...
>I have a job that runs every evening that calls 2 stored
>procedures. The first sp inserts new records from a
>source database.table into a destination database.table
>the second job updates the records that were changed at
>the source and updates them at the destination.
>This job has run successfully without problems for 2+
>years. Now for the past two days the second step is
>blowing up the transaction log to more than 7GB (it's
>limit). Any idea what could cause this new behaviour?
>Any suggestions as to what I need to look at would be
>most appreciated.
>Thanks,
>Marc
>
>.
>
|||Hi,
Execute the below command to get the recovery model.
select databasepropertyex('dbname','recovery')
If the recovery model is FULL or BULK_LOGGED, you need to perform a
trasnaction log backup inbetween your back job.
Currently to reduce the trasnaction log file size you have backup the
transaction log shrink the file
Steps to shrink the Log file.
1. Backup the transaction log (Use command:- BACKUP Log dbname to
disk='c:\backup\dbname.tr1' (or use enterprise manager) or (if you do need
the trasaction log backup execute below command)
backup log <dbname> with truncate_only
2. Indetify the log file for shrinking:-
use <dbname>
go
sp_helpfile
Based on the name column for the transaction log file execute the dbcc
shrinkfile
3. Shrink the transaction log file.
DBCC SHRINKFILE('logical_transaction_log file name','truncateonly')
Have a look into the below article,
INF: Shrinking the Transaction Log in SQL Server 2000 with DBCC SHRINKFILE
http://support.microsoft.com/default...b;en-us;272318
Thanks
Hari
MCDBA
"Marc" <anonymous@.discussions.microsoft.com> wrote in message
news:2415501c45f52$18baf880$a301280a@.phx.gbl...[vbcol=seagreen]
> Could be that the recovery model was changed from simple
> to full, but I'm not 100% as this isn't my server.
> The backups that are scheduled after this job runs fail
> as there is no longer adequate disk space.
> changed recently? Are the
> message
|||Thanks for the info Hari, but I've done all that.
There are transaction log backups every two hours.
What I'm trying to resolve is the offending job that
unexplainably filling up the log to capacity every
evening and then subsequently failing out.
Thanks,
Marc
>--Original Message--
>Hi,
>Execute the below command to get the recovery model.
>select databasepropertyex('dbname','recovery')
>If the recovery model is FULL or BULK_LOGGED, you need
to perform a
>trasnaction log backup inbetween your back job.
>Currently to reduce the trasnaction log file size you
have backup the
>transaction log shrink the file
>Steps to shrink the Log file.
>1. Backup the transaction log (Use command:- BACKUP Log
dbname to
>disk='c:\backup\dbname.tr1' (or use enterprise
manager) or (if you do need
>the trasaction log backup execute below command)
> backup log <dbname> with truncate_only
>2. Indetify the log file for shrinking:-
> use <dbname>
> go
> sp_helpfile
>Based on the name column for the transaction log file
execute the dbcc
>shrinkfile
>3. Shrink the transaction log file.
> DBCC SHRINKFILE('logical_transaction_log file
name','truncateonly')
>
>Have a look into the below article,
>INF: Shrinking the Transaction Log in SQL Server 2000
with DBCC SHRINKFILE
>http://support.microsoft.com/default.aspx?scid=kb;en-
us;272318
>
>--
>Thanks
>Hari
>MCDBA
>"Marc" <anonymous@.discussions.microsoft.com> wrote in
message[vbcol=seagreen]
>news:2415501c45f52$18baf880$a301280a@.phx.gbl...
simple[vbcol=seagreen]
stored[vbcol=seagreen]
database.table[vbcol=seagreen]
at
>
>.
>
Job failure
I have a job that runs every evening that calls 2 stored
procedures. The first sp inserts new records from a
source database.table into a destination database.table
the second job updates the records that were changed at
the source and updates them at the destination.
This job has run successfully without problems for 2+
years. Now for the past two days the second step is
blowing up the transaction log to more than 7GB (it's
limit). Any idea what could cause this new behaviour?
Any suggestions as to what I need to look at would be
most appreciated.
Thanks,
MarcIs the recovery model of the destination database changed recently? Are the
backup jobs running successfully?
--
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
Is .NET important for a database professional?
http://vyaskn.tripod.com/poll.htm
"Marc" <anonymous@.discussions.microsoft.com> wrote in message
news:244ad01c45f4f$3aef6880$a501280a@.phx
.gbl...
I have a job that runs every evening that calls 2 stored
procedures. The first sp inserts new records from a
source database.table into a destination database.table
the second job updates the records that were changed at
the source and updates them at the destination.
This job has run successfully without problems for 2+
years. Now for the past two days the second step is
blowing up the transaction log to more than 7GB (it's
limit). Any idea what could cause this new behaviour?
Any suggestions as to what I need to look at would be
most appreciated.
Thanks,
Marc|||Could be that the recovery model was changed from simple
to full, but I'm not 100% as this isn't my server.
The backups that are scheduled after this job runs fail
as there is no longer adequate disk space.
>--Original Message--
>Is the recovery model of the destination database
changed recently? Are the
>backup jobs running successfully?
>--
>HTH,
>Vyas, MVP (SQL Server)
>http://vyaskn.tripod.com/
>Is .NET important for a database professional?
>http://vyaskn.tripod.com/poll.htm
>
>"Marc" <anonymous@.discussions.microsoft.com> wrote in
message
> news:244ad01c45f4f$3aef6880$a501280a@.phx
.gbl...
>I have a job that runs every evening that calls 2 stored
>procedures. The first sp inserts new records from a
>source database.table into a destination database.table
>the second job updates the records that were changed at
>the source and updates them at the destination.
>This job has run successfully without problems for 2+
>years. Now for the past two days the second step is
>blowing up the transaction log to more than 7GB (it's
>limit). Any idea what could cause this new behaviour?
>Any suggestions as to what I need to look at would be
>most appreciated.
>Thanks,
>Marc
>
>.
>|||Hi,
Execute the below command to get the recovery model.
select databasepropertyex('dbname','recovery')
If the recovery model is FULL or BULK_LOGGED, you need to perform a
trasnaction log backup inbetween your back job.
Currently to reduce the trasnaction log file size you have backup the
transaction log shrink the file
Steps to shrink the Log file.
1. Backup the transaction log (Use command:- BACKUP Log dbname to
disk='c:\backup\dbname.tr1' (or use enterprise manager) or (if you do need
the trasaction log backup execute below command)
backup log <dbname> with truncate_only
2. Indetify the log file for shrinking:-
use <dbname>
go
sp_helpfile
Based on the name column for the transaction log file execute the dbcc
shrinkfile
3. Shrink the transaction log file.
DBCC SHRINKFILE('logical_transaction_log file name','truncateonly')
Have a look into the below article,
INF: Shrinking the Transaction Log in SQL Server 2000 with DBCC SHRINKFILE
http://support.microsoft.com/defaul...kb;en-us;272318
Thanks
Hari
MCDBA
"Marc" <anonymous@.discussions.microsoft.com> wrote in message
news:2415501c45f52$18baf880$a301280a@.phx
.gbl...[vbcol=seagreen]
> Could be that the recovery model was changed from simple
> to full, but I'm not 100% as this isn't my server.
> The backups that are scheduled after this job runs fail
> as there is no longer adequate disk space.
>
> changed recently? Are the
> message|||Thanks for the info Hari, but I've done all that.
There are transaction log backups every two hours.
What I'm trying to resolve is the offending job that
unexplainably filling up the log to capacity every
evening and then subsequently failing out.
Thanks,
Marc
>--Original Message--
>Hi,
>Execute the below command to get the recovery model.
>select databasepropertyex('dbname','recovery')
>If the recovery model is FULL or BULK_LOGGED, you need
to perform a
>trasnaction log backup inbetween your back job.
>Currently to reduce the trasnaction log file size you
have backup the
>transaction log shrink the file
>Steps to shrink the Log file.
>1. Backup the transaction log (Use command:- BACKUP Log
dbname to
>disk='c:\backup\dbname.tr1' (or use enterprise
manager) or (if you do need
>the trasaction log backup execute below command)
> backup log <dbname> with truncate_only
>2. Indetify the log file for shrinking:-
> use <dbname>
> go
> sp_helpfile
>Based on the name column for the transaction log file
execute the dbcc
>shrinkfile
>3. Shrink the transaction log file.
> DBCC SHRINKFILE('logical_transaction_log file
name','truncateonly')
>
>Have a look into the below article,
>INF: Shrinking the Transaction Log in SQL Server 2000
with DBCC SHRINKFILE
>http://support.microsoft.com/default.aspx?scid=kb;en-
us;272318
>
>--
>Thanks
>Hari
>MCDBA
>"Marc" <anonymous@.discussions.microsoft.com> wrote in
message
> news:2415501c45f52$18baf880$a301280a@.phx
.gbl...
simple[vbcol=seagreen]
stored[vbcol=seagreen]
database.table[vbcol=seagreen]
at[vbcol=seagreen]
>
>.
>
procedures. The first sp inserts new records from a
source database.table into a destination database.table
the second job updates the records that were changed at
the source and updates them at the destination.
This job has run successfully without problems for 2+
years. Now for the past two days the second step is
blowing up the transaction log to more than 7GB (it's
limit). Any idea what could cause this new behaviour?
Any suggestions as to what I need to look at would be
most appreciated.
Thanks,
MarcIs the recovery model of the destination database changed recently? Are the
backup jobs running successfully?
--
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
Is .NET important for a database professional?
http://vyaskn.tripod.com/poll.htm
"Marc" <anonymous@.discussions.microsoft.com> wrote in message
news:244ad01c45f4f$3aef6880$a501280a@.phx
.gbl...
I have a job that runs every evening that calls 2 stored
procedures. The first sp inserts new records from a
source database.table into a destination database.table
the second job updates the records that were changed at
the source and updates them at the destination.
This job has run successfully without problems for 2+
years. Now for the past two days the second step is
blowing up the transaction log to more than 7GB (it's
limit). Any idea what could cause this new behaviour?
Any suggestions as to what I need to look at would be
most appreciated.
Thanks,
Marc|||Could be that the recovery model was changed from simple
to full, but I'm not 100% as this isn't my server.
The backups that are scheduled after this job runs fail
as there is no longer adequate disk space.
>--Original Message--
>Is the recovery model of the destination database
changed recently? Are the
>backup jobs running successfully?
>--
>HTH,
>Vyas, MVP (SQL Server)
>http://vyaskn.tripod.com/
>Is .NET important for a database professional?
>http://vyaskn.tripod.com/poll.htm
>
>"Marc" <anonymous@.discussions.microsoft.com> wrote in
message
> news:244ad01c45f4f$3aef6880$a501280a@.phx
.gbl...
>I have a job that runs every evening that calls 2 stored
>procedures. The first sp inserts new records from a
>source database.table into a destination database.table
>the second job updates the records that were changed at
>the source and updates them at the destination.
>This job has run successfully without problems for 2+
>years. Now for the past two days the second step is
>blowing up the transaction log to more than 7GB (it's
>limit). Any idea what could cause this new behaviour?
>Any suggestions as to what I need to look at would be
>most appreciated.
>Thanks,
>Marc
>
>.
>|||Hi,
Execute the below command to get the recovery model.
select databasepropertyex('dbname','recovery')
If the recovery model is FULL or BULK_LOGGED, you need to perform a
trasnaction log backup inbetween your back job.
Currently to reduce the trasnaction log file size you have backup the
transaction log shrink the file
Steps to shrink the Log file.
1. Backup the transaction log (Use command:- BACKUP Log dbname to
disk='c:\backup\dbname.tr1' (or use enterprise manager) or (if you do need
the trasaction log backup execute below command)
backup log <dbname> with truncate_only
2. Indetify the log file for shrinking:-
use <dbname>
go
sp_helpfile
Based on the name column for the transaction log file execute the dbcc
shrinkfile
3. Shrink the transaction log file.
DBCC SHRINKFILE('logical_transaction_log file name','truncateonly')
Have a look into the below article,
INF: Shrinking the Transaction Log in SQL Server 2000 with DBCC SHRINKFILE
http://support.microsoft.com/defaul...kb;en-us;272318
Thanks
Hari
MCDBA
"Marc" <anonymous@.discussions.microsoft.com> wrote in message
news:2415501c45f52$18baf880$a301280a@.phx
.gbl...[vbcol=seagreen]
> Could be that the recovery model was changed from simple
> to full, but I'm not 100% as this isn't my server.
> The backups that are scheduled after this job runs fail
> as there is no longer adequate disk space.
>
> changed recently? Are the
> message|||Thanks for the info Hari, but I've done all that.
There are transaction log backups every two hours.
What I'm trying to resolve is the offending job that
unexplainably filling up the log to capacity every
evening and then subsequently failing out.
Thanks,
Marc
>--Original Message--
>Hi,
>Execute the below command to get the recovery model.
>select databasepropertyex('dbname','recovery')
>If the recovery model is FULL or BULK_LOGGED, you need
to perform a
>trasnaction log backup inbetween your back job.
>Currently to reduce the trasnaction log file size you
have backup the
>transaction log shrink the file
>Steps to shrink the Log file.
>1. Backup the transaction log (Use command:- BACKUP Log
dbname to
>disk='c:\backup\dbname.tr1' (or use enterprise
manager) or (if you do need
>the trasaction log backup execute below command)
> backup log <dbname> with truncate_only
>2. Indetify the log file for shrinking:-
> use <dbname>
> go
> sp_helpfile
>Based on the name column for the transaction log file
execute the dbcc
>shrinkfile
>3. Shrink the transaction log file.
> DBCC SHRINKFILE('logical_transaction_log file
name','truncateonly')
>
>Have a look into the below article,
>INF: Shrinking the Transaction Log in SQL Server 2000
with DBCC SHRINKFILE
>http://support.microsoft.com/default.aspx?scid=kb;en-
us;272318
>
>--
>Thanks
>Hari
>MCDBA
>"Marc" <anonymous@.discussions.microsoft.com> wrote in
message
> news:2415501c45f52$18baf880$a301280a@.phx
.gbl...
simple[vbcol=seagreen]
stored[vbcol=seagreen]
database.table[vbcol=seagreen]
at[vbcol=seagreen]
>
>.
>
Job failure
I have a job that runs every evening that calls 2 stored
procedures. The first sp inserts new records from a
source database.table into a destination database.table
the second job updates the records that were changed at
the source and updates them at the destination.
This job has run successfully without problems for 2+
years. Now for the past two days the second step is
blowing up the transaction log to more than 7GB (it's
limit). Any idea what could cause this new behaviour?
Any suggestions as to what I need to look at would be
most appreciated.
Thanks,
MarcIs the recovery model of the destination database changed recently? Are the
backup jobs running successfully?
--
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
Is .NET important for a database professional?
http://vyaskn.tripod.com/poll.htm
"Marc" <anonymous@.discussions.microsoft.com> wrote in message
news:244ad01c45f4f$3aef6880$a501280a@.phx.gbl...
I have a job that runs every evening that calls 2 stored
procedures. The first sp inserts new records from a
source database.table into a destination database.table
the second job updates the records that were changed at
the source and updates them at the destination.
This job has run successfully without problems for 2+
years. Now for the past two days the second step is
blowing up the transaction log to more than 7GB (it's
limit). Any idea what could cause this new behaviour?
Any suggestions as to what I need to look at would be
most appreciated.
Thanks,
Marc|||Could be that the recovery model was changed from simple
to full, but I'm not 100% as this isn't my server.
The backups that are scheduled after this job runs fail
as there is no longer adequate disk space.
>--Original Message--
>Is the recovery model of the destination database
changed recently? Are the
>backup jobs running successfully?
>--
>HTH,
>Vyas, MVP (SQL Server)
>http://vyaskn.tripod.com/
>Is .NET important for a database professional?
>http://vyaskn.tripod.com/poll.htm
>
>"Marc" <anonymous@.discussions.microsoft.com> wrote in
message
>news:244ad01c45f4f$3aef6880$a501280a@.phx.gbl...
>I have a job that runs every evening that calls 2 stored
>procedures. The first sp inserts new records from a
>source database.table into a destination database.table
>the second job updates the records that were changed at
>the source and updates them at the destination.
>This job has run successfully without problems for 2+
>years. Now for the past two days the second step is
>blowing up the transaction log to more than 7GB (it's
>limit). Any idea what could cause this new behaviour?
>Any suggestions as to what I need to look at would be
>most appreciated.
>Thanks,
>Marc
>
>.
>|||Hi,
Execute the below command to get the recovery model.
select databasepropertyex('dbname','recovery')
If the recovery model is FULL or BULK_LOGGED, you need to perform a
trasnaction log backup inbetween your back job.
Currently to reduce the trasnaction log file size you have backup the
transaction log shrink the file
Steps to shrink the Log file.
1. Backup the transaction log (Use command:- BACKUP Log dbname to
disk='c:\backup\dbname.tr1' (or use enterprise manager) or (if you do need
the trasaction log backup execute below command)
backup log <dbname> with truncate_only
2. Indetify the log file for shrinking:-
use <dbname>
go
sp_helpfile
Based on the name column for the transaction log file execute the dbcc
shrinkfile
3. Shrink the transaction log file.
DBCC SHRINKFILE('logical_transaction_log file name','truncateonly')
Have a look into the below article,
INF: Shrinking the Transaction Log in SQL Server 2000 with DBCC SHRINKFILE
http://support.microsoft.com/default.aspx?scid=kb;en-us;272318
Thanks
Hari
MCDBA
"Marc" <anonymous@.discussions.microsoft.com> wrote in message
news:2415501c45f52$18baf880$a301280a@.phx.gbl...
> Could be that the recovery model was changed from simple
> to full, but I'm not 100% as this isn't my server.
> The backups that are scheduled after this job runs fail
> as there is no longer adequate disk space.
> >--Original Message--
> >Is the recovery model of the destination database
> changed recently? Are the
> >backup jobs running successfully?
> >--
> >HTH,
> >Vyas, MVP (SQL Server)
> >http://vyaskn.tripod.com/
> >Is .NET important for a database professional?
> >http://vyaskn.tripod.com/poll.htm
> >
> >
> >"Marc" <anonymous@.discussions.microsoft.com> wrote in
> message
> >news:244ad01c45f4f$3aef6880$a501280a@.phx.gbl...
> >I have a job that runs every evening that calls 2 stored
> >procedures. The first sp inserts new records from a
> >source database.table into a destination database.table
> >the second job updates the records that were changed at
> >the source and updates them at the destination.
> >
> >This job has run successfully without problems for 2+
> >years. Now for the past two days the second step is
> >blowing up the transaction log to more than 7GB (it's
> >limit). Any idea what could cause this new behaviour?
> >
> >Any suggestions as to what I need to look at would be
> >most appreciated.
> >
> >Thanks,
> >Marc
> >
> >
> >.
> >|||Thanks for the info Hari, but I've done all that.
There are transaction log backups every two hours.
I have a job that runs every evening that calls 2 stored
procedures. The first sp inserts new records from a
source database.table into a destination database.table;
the second sp updates the destination records that were
changed at the source.
This job has run successfully without problems for 2+
years. Now, for the past two days, the second step has been
blowing up the transaction log to more than 7GB (its
limit). Any idea what could cause this new behaviour?
Any suggestions as to what I need to look at would be
most appreciated.
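For reference, a job like the one described might be sketched as below; all database, table, and column names here are hypothetical. The point to note is that the step-2 UPDATE runs as a single transaction, so if its change filter suddenly matches far more rows than usual, the entire rewrite is logged at once:

```sql
-- Hypothetical sketch of the nightly job's two steps.

-- Step 1: insert rows that do not yet exist at the destination.
INSERT INTO DestDB.dbo.Customers (CustomerID, Name, ModifiedDate)
SELECT s.CustomerID, s.Name, s.ModifiedDate
FROM SourceDB.dbo.Customers AS s
WHERE NOT EXISTS (SELECT 1
                  FROM DestDB.dbo.Customers AS d
                  WHERE d.CustomerID = s.CustomerID)

-- Step 2: update destination rows that changed at the source.
-- This is one transaction; every modified row is fully logged, so a
-- predicate that unexpectedly matches most of the table (for example,
-- after a bulk "touch" of ModifiedDate at the source) can fill the log.
UPDATE d
SET d.Name = s.Name,
    d.ModifiedDate = s.ModifiedDate
FROM DestDB.dbo.Customers AS d
JOIN SourceDB.dbo.Customers AS s
    ON s.CustomerID = d.CustomerID
WHERE s.ModifiedDate > d.ModifiedDate
```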
Thanks,
Marc|||Is the recovery model of the destination database changed recently? Are the
backup jobs running successfully?
--
HTH,
Vyas, MVP (SQL Server)
http://vyaskn.tripod.com/
Is .NET important for a database professional?
http://vyaskn.tripod.com/poll.htm
"Marc" <anonymous@.discussions.microsoft.com> wrote in message
news:244ad01c45f4f$3aef6880$a501280a@.phx.gbl...
I have a job that runs every evening that calls 2 stored
procedures. The first sp inserts new records from a
source database.table into a destination database.table
the second job updates the records that were changed at
the source and updates them at the destination.
This job has run successfully without problems for 2+
years. Now for the past two days the second step is
blowing up the transaction log to more than 7GB (its
limit). Any idea what could cause this new behaviour?
Any suggestions as to what I need to look at would be
most appreciated.
Thanks,
Marc|||Could be that the recovery model was changed from simple
to full, but I'm not 100% as this isn't my server.
The backups that are scheduled after this job runs fail
as there is no longer adequate disk space.
>--Original Message--
>Is the recovery model of the destination database changed recently? Are the backup jobs running successfully?
>--
>HTH,
>Vyas, MVP (SQL Server)
>http://vyaskn.tripod.com/
>Is .NET important for a database professional?
>http://vyaskn.tripod.com/poll.htm
>
>"Marc" <anonymous@.discussions.microsoft.com> wrote in message
>news:244ad01c45f4f$3aef6880$a501280a@.phx.gbl...
>I have a job that runs every evening that calls 2 stored
>procedures. The first sp inserts new records from a
>source database.table into a destination database.table
>the second job updates the records that were changed at
>the source and updates them at the destination.
>This job has run successfully without problems for 2+
>years. Now for the past two days the second step is
>blowing up the transaction log to more than 7GB (it's
>limit). Any idea what could cause this new behaviour?
>Any suggestions as to what I need to look at would be
>most appreciated.
>Thanks,
>Marc
>
>.
>|||Hi,
Execute the command below to get the recovery model:
select databasepropertyex('dbname','recovery')
If the recovery model is FULL or BULK_LOGGED, you need to perform
transaction log backups between your backup jobs.
Currently, to reduce the transaction log file size, you have to back up the
transaction log and then shrink the file.
Steps to shrink the log file:
1. Back up the transaction log (use the command BACKUP LOG dbname TO
DISK='c:\backup\dbname.tr1', or use Enterprise Manager). If you do not need
the transaction log backup, execute:
   backup log <dbname> with truncate_only
2. Identify the log file for shrinking:
   use <dbname>
   go
   sp_helpfile
Based on the name column for the transaction log file, execute DBCC
SHRINKFILE.
3. Shrink the transaction log file:
   DBCC SHRINKFILE('logical_log_file_name', TRUNCATEONLY)
Have a look at the article below:
INF: Shrinking the Transaction Log in SQL Server 2000 with DBCC SHRINKFILE
http://support.microsoft.com/default.aspx?scid=kb;en-us;272318
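Put together, Hari's three steps amount to something like this (MyDB and MyDB_Log are hypothetical database and logical log file names):

```sql
-- 1. Back up the transaction log (or discard it with TRUNCATE_ONLY
--    if you do not need point-in-time recovery from it).
BACKUP LOG MyDB TO DISK = 'c:\backup\MyDB.tr1'
-- BACKUP LOG MyDB WITH TRUNCATE_ONLY

-- 2. Find the logical name of the log file (name column of the output).
USE MyDB
GO
EXEC sp_helpfile
GO

-- 3. Shrink the log file, releasing the now-unused space to the OS.
DBCC SHRINKFILE ('MyDB_Log', TRUNCATEONLY)
```

Note that WITH TRUNCATE_ONLY discards log records without backing them up, which breaks the log backup chain until the next full backup.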
Thanks
Hari
MCDBA
"Marc" <anonymous@.discussions.microsoft.com> wrote in message
news:2415501c45f52$18baf880$a301280a@.phx.gbl...
> Could be that the recovery model was changed from simple
> to full, but I'm not 100% as this isn't my server.
> The backups that are scheduled after this job runs fail
> as there is no longer adequate disk space.
> >--Original Message--
> >Is the recovery model of the destination database changed recently? Are the backup jobs running successfully?
> >--
> >HTH,
> >Vyas, MVP (SQL Server)
> >http://vyaskn.tripod.com/
> >Is .NET important for a database professional?
> >http://vyaskn.tripod.com/poll.htm
> >
> >
> >"Marc" <anonymous@.discussions.microsoft.com> wrote in message
> >news:244ad01c45f4f$3aef6880$a501280a@.phx.gbl...
> >I have a job that runs every evening that calls 2 stored
> >procedures. The first sp inserts new records from a
> >source database.table into a destination database.table
> >the second job updates the records that were changed at
> >the source and updates them at the destination.
> >
> >This job has run successfully without problems for 2+
> >years. Now for the past two days the second step is
> >blowing up the transaction log to more than 7GB (its
> >limit). Any idea what could cause this new behaviour?
> >
> >Any suggestions as to what I need to look at would be
> >most appreciated.
> >
> >Thanks,
> >Marc
> >
> >
> >.
> >|||Thanks for the info Hari, but I've done all that.
There are transaction log backups every two hours.
What I'm trying to resolve is the offending job that is
inexplicably filling the log to capacity every evening
and then failing.
Thanks,
Marc
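To narrow down which statement is filling the log, a couple of standard commands can help (MyDB is a hypothetical database name); run them before, during, and after the job:

```sql
-- Log size and percent-used for every database on the server.
DBCC SQLPERF(LOGSPACE)

-- Oldest active transaction in the current database; a long-running
-- transaction prevents all subsequent log records from being reused.
USE MyDB
GO
DBCC OPENTRAN
```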
>--Original Message--
>Hi,
>Execute the command below to get the recovery model:
>select databasepropertyex('dbname','recovery')
>If the recovery model is FULL or BULK_LOGGED, you need to perform transaction log backups between your backup jobs.
>Currently, to reduce the transaction log file size, you have to back up the transaction log and then shrink the file.
>Steps to shrink the log file:
>1. Back up the transaction log (use the command BACKUP LOG dbname TO DISK='c:\backup\dbname.tr1', or use Enterprise Manager). If you do not need the transaction log backup, execute:
> backup log <dbname> with truncate_only
>2. Identify the log file for shrinking:
> use <dbname>
> go
> sp_helpfile
>Based on the name column for the transaction log file, execute DBCC SHRINKFILE.
>3. Shrink the transaction log file:
> DBCC SHRINKFILE('logical_log_file_name', TRUNCATEONLY)
>
>Have a look at the article below:
>INF: Shrinking the Transaction Log in SQL Server 2000 with DBCC SHRINKFILE
>http://support.microsoft.com/default.aspx?scid=kb;en-us;272318
>
>--
>Thanks
>Hari
>MCDBA
>"Marc" <anonymous@.discussions.microsoft.com> wrote in message
>news:2415501c45f52$18baf880$a301280a@.phx.gbl...
>> Could be that the recovery model was changed from simple
>> to full, but I'm not 100% as this isn't my server.
>> The backups that are scheduled after this job runs fail
>> as there is no longer adequate disk space.
>> >--Original Message--
>> >Is the recovery model of the destination database changed recently? Are the backup jobs running successfully?
>> >--
>> >HTH,
>> >Vyas, MVP (SQL Server)
>> >http://vyaskn.tripod.com/
>> >Is .NET important for a database professional?
>> >http://vyaskn.tripod.com/poll.htm
>> >
>> >
>> >"Marc" <anonymous@.discussions.microsoft.com> wrote in message
>> >news:244ad01c45f4f$3aef6880$a501280a@.phx.gbl...
>> >I have a job that runs every evening that calls 2 stored
>> >procedures. The first sp inserts new records from a
>> >source database.table into a destination database.table
>> >the second job updates the records that were changed at
>> >the source and updates them at the destination.
>> >
>> >This job has run successfully without problems for 2+
>> >years. Now for the past two days the second step is
>> >blowing up the transaction log to more than 7GB (its
>> >limit). Any idea what could cause this new behaviour?
>> >
>> >Any suggestions as to what I need to look at would be
>> >most appreciated.
>> >
>> >Thanks,
>> >Marc
>> >
>> >
>> >.
>> >
>
>.
>