oracle – SQL Server scheduler truncates CLOB data

We have a T-SQL batch that moves data from an Oracle table to a SQL Server table. When we run this batch manually, it works well, but when the scheduler runs it, the CLOB data is truncated. So we converted the batch into a SQL Server procedure. When we execute the proc manually, it works well, but when we run it through the scheduler, giving the procedure name to exec in the job, the data is cut short for some unknown reason. Could anyone tell me why this is happening and what we can do to fix it?
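For what it's worth, one classic cause of LOB data that is intact when run interactively but cut short under a scheduler is a session-level setting: SSMS sets a very large TEXTSIZE by default, while other execution contexts may not. A hedged sketch (the procedure name is hypothetical):

ALTER PROCEDURE dbo.move_oracle_clob_data  -- hypothetical name
AS
BEGIN
    -- Pin the LOB size limit explicitly so it no longer depends on the
    -- client's session defaults (2147483647 is the maximum).
    SET TEXTSIZE 2147483647;

    -- ... existing INSERT ... SELECT from the Oracle linked server ...
END;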

How to see the resumable timeout of another session in Oracle?

How do I see the resumable timeout setting of another session? I found the function dbms_resumable.get_timeout, but it offers no parameter to target a specified session.
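For what it's worth, dbms_resumable.get_timeout reports only the current session, but the DBA_RESUMABLE dictionary view exposes the timeout for every session that has enabled resumable space allocation (requires access to the view):

-- One row per resumable session.
SELECT session_id, instance_id, status, timeout, name
FROM   dba_resumable;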

oracle – "lsnrctl stat listener" takes a long time

When I try to connect to the database with SQL*Plus, I get a "no listener" error. The solutions I found suggest running the command "lsnrctl start listener", but it takes a very long time. Afterwards I can connect to the database, yet once I leave the command prompt and try to log in again I cannot, and I have to run the same command again, with the same long wait.
Can anyone help me?
Thank you in advance.
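For anyone hitting this, a hedged first check: a multi-minute lsnrctl start/status is very often a hostname-resolution timeout, so make sure the HOST used by the listener resolves locally (names and the address below are placeholders):

# Check that the listener's host name resolves without going to DNS.
hostname
grep "$(hostname)" /etc/hosts   # expect a matching entry

# If it is missing, add something like:
#   192.168.1.10   myhost.localdomain   myhost
# then retry:
lsnrctl start listener
lsnrctl status listener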

oracle – Export SCHEMAS and TABLES at the same time

My task is to export the sch1 and sch2 schemas.

I must also export the tables ts88 and ts89 from sch3.

How can I do it in one operation?
Because if I run two separate operations:

DIRECTORY=dir
ESTIMATE_ONLY=Y
LOGFILE=file.log
SCHEMAS=sch1, sch2
PARALLEL=4

and

DIRECTORY=dir
ESTIMATE_ONLY=Y
LOGFILE=file.log
TABLES=sch3.ts88, sch3.ts89
PARALLEL=4

both work well.

But when I try something like this:

DIRECTORY=dir
ESTIMATE_ONLY=Y
LOGFILE=file.log
SCHEMAS=sch1, sch2
TABLES=sch3.ts88, sch3.ts89
PARALLEL=4

I get UDE-00010: multiple job modes requested, schema and tables.

If I understand correctly, it is not possible to run a single export covering both schemas and tables.

Can I do it with INCLUDE or in any other way, or must it be two separate operations?
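For what it's worth, Data Pump genuinely does not allow mixing job modes in one job (that is exactly what UDE-00010 means), so the usual approach is simply to chain the two runs. A minimal sketch, assuming parameter files named schemas.par and tables.par with the contents above, and distinct LOGFILE values so the second run does not overwrite the first log:

# Hypothetical wrapper: run the schema-mode and table-mode jobs back to back.
expdp system/password PARFILE=schemas.par   # SCHEMAS=sch1,sch2   LOGFILE=schemas.log
expdp system/password PARFILE=tables.par    # TABLES=sch3.ts88,sch3.ts89   LOGFILE=tables.log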

How to install Oracle XE for local development and use autostart?

In reality, the solution is simple and consists of the following basic steps, executed as the oracle user:

  1. Change N to Y in the only entry in /etc/oratab
  2. Start the listener: lsnrctl start
  3. Start the database(s): dbstart

Automating the above steps takes a little more work.

Create the directory in which to store the scripts:

mkdir /home/oracle/scripts

Now create the three necessary scripts (environment configuration, start and stop):

/home/oracle/scripts/set_env.sh:

# Oracle Settings

# See: https://oracle-base.com/articles/linux/automating-database-startup-and-shutdown-on-linux#what-i-use

export TMP=/tmp
export TMPDIR=$TMP

export ORACLE_HOSTNAME=oracle7.localdomain
export ORACLE_UNQNAME=xe
export ORACLE_BASE=/opt/oracle
export ORACLE_HOME=$ORACLE_BASE/product/18c/dbhomeXE
export ORACLE_SID=xe

export PATH=/usr/sbin:/usr/local/bin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

/home/oracle/scripts/start_all.sh:

#!/bin/bash

# See: https://oracle-base.com/articles/linux/automating-database-startup-and-shutdown-on-linux

export ORAENV_ASK=NO
. oraenv
export ORAENV_ASK=YES

# This had to be moved after the "oraenv" call because that sets $ORACLE_HOME
# incorrectly due to the fact that "dbhome" returns "/home/oracle" instead of the
# actual/correct value, "/opt/oracle/product/18c/dbhomeXE".

. /home/oracle/scripts/set_env.sh

dbstart $ORACLE_HOME

/home/oracle/scripts/stop_all.sh:

#!/bin/bash

# See: https://oracle-base.com/articles/linux/automating-database-startup-and-shutdown-on-linux

export ORAENV_ASK=NO
. oraenv
export ORAENV_ASK=YES

# This had to be moved after the "oraenv" call because that sets $ORACLE_HOME
# incorrectly due to the fact that "dbhome" returns "/home/oracle" instead of the
# actual/correct value, "/opt/oracle/product/18c/dbhomeXE".

. /home/oracle/scripts/set_env.sh

dbshut $ORACLE_HOME

Finally, set the appropriate ownership and permissions on the scripts:

chown -R oracle:oinstall /home/oracle/scripts
chmod -R 770 /home/oracle/scripts

As of Oracle Linux 7, systemd is included, making it easier to manage not only startup at boot, but also automatic, graceful shutdown.

/home/oracle/scripts/enable-automatic-startup.sh:

#!/bin/sh

# Enable automatic startup for the default instance (otherwise, the "dbstart"
# command will not start it).

sed -i "s/:N/:Y/" /etc/oratab

# See: https://oracle-base.com/articles/linux/linux-services-systemd#creating-linux-services

# The unit file content below is reconstructed following the oracle-base
# article referenced above; adjust the description and paths as needed.
cat > /lib/systemd/system/dbora.service <<EOF
[Unit]
Description=The Oracle Database Service
After=syslog.target network.target

[Service]
Type=oneshot
RemainAfterExit=yes
User=oracle
Group=oinstall
ExecStart=/home/oracle/scripts/start_all.sh
ExecStop=/home/oracle/scripts/stop_all.sh

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable dbora.service

Finally, run the script above to enable automatic startup and shutdown of the required Oracle database services:

./enable-automatic-startup.sh

Now, when the system is rebooted, all required services will be launched automatically and clients will be able to connect successfully.
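A quick post-reboot sanity check might look like this (dbora.service as created above; querying v$instance is just one way to confirm the database is open):

systemctl status dbora.service
echo "select status from v\$instance;" | sqlplus -s / as sysdba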

  1. Why does dbhome return /home/oracle? Is it because the correct directory, /opt/oracle/product/18c/dbhomeXE, was not specified in a response file during installation?

backup – Windows suddenly crashed, taking the Oracle DB with it

I had an Oracle XE 11g database on Windows 7, with the database files (the old oradata, fast_recovery, product and administrator directories) on the (D:) partition, which is backed up.
I have installed a fresh Windows with the same old machine name, and a new Oracle database with the same paths, without ever having taken an RMAN or hot backup.
What are the steps to replace the new database files with the old ones?
I need to access my old database.

Thank you.

oracle – What is the difference between select count(1) from table and select count(*) from table

Nothing. count(1) is implicitly transformed into count(*); the 10053 optimizer trace below shows the final transformed query:

SQL> select tracefile from v$process where addr = (select paddr from v$session where sid = sys_context('userenv', 'sid'));

TRACEFILE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcl_o71/ORCL/trace/ORCL_ora_3610.trc

SQL> alter session set events '10053 level 1';

Session altered.

SQL> select count(1) from dual;

  COUNT(1)
----------
         1

SQL> alter session set events '10053 off';

Session altered.

SQL> exit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.4.0.0.0
(oracle@o71 ~)$ grep "Final query" -A 1 /u01/app/oracle/diag/rdbms/orcl_o71/ORCL/trace/ORCL_ora_3610.trc
Final query after transformations:******* UNPARSED QUERY IS *******
SELECT COUNT(*) "COUNT(1)" FROM "SYS"."DUAL" "DUAL"
(oracle@o71 ~)$

How to perform an Oracle insert or update on BigQuery with the help of Pentaho?

I need to insert data from Oracle into BigQuery using Pentaho, with a transformation like the one in the diagram below:

(Diagram of the Pentaho transformation omitted.)

But when I run it, the following errors are raised at the Insert/update step:

2019/09/25 17:34:43 - Insert / update.0 - ERROR (version 8.3.0.0-371, build 8.3.0.0-371 from 2019-06-11 11.09.08 by buildguy) : Error in step, asking everyone to stop because of:
2019/09/25 17:34:43 - Insert / update.0 - ERROR (version 8.3.0.0-371, build 8.3.0.0-371 from 2019-06-11 11.09.08 by buildguy) : org.pentaho.di.core.exception.KettleDatabaseException:
2019/09/25 17:34:43 - Insert / update.0 - Error inserting/updating row
2019/09/25 17:34:43 - Insert / update.0 - (Simba)JDBC Null pointer exception.
2019/09/25 17:34:43 - Insert / update.0 -
2019/09/25 17:34:43 - Insert / update.0 - at org.pentaho.di.core.database.Database.insertRow(Database.java:1324)
2019/09/25 17:34:43 - Insert / update.0 - at org.pentaho.di.core.database.Database.insertRow(Database.java:1248)
2019/09/25 17:34:43 - Insert / update.0 - at org.pentaho.di.core.database.Database.insertRow(Database.java:1236)
2019/09/25 17:34:43 - Insert / update.0 - at org.pentaho.di.core.database.Database.insertRow(Database.java:1224)
2019/09/25 17:34:43 - Insert / update.0 - at org.pentaho.di.trans.steps.insertupdate.InsertUpdate.lookupValues(InsertUpdate.java:114)
2019/09/25 17:34:43 - Insert / update.0 - at org.pentaho.di.trans.steps.insertupdate.InsertUpdate.processRow(InsertUpdate.java:299)
2019/09/25 17:34:43 - Insert / update.0 - at org.pentaho.di.trans.step.RunThread.run(RunThread.java:62)
2019/09/25 17:34:43 - Insert / update.0 - at java.lang.Thread.run(Unknown Source)
2019/09/25 17:34:43 - Insert / update.0 - Caused by: java.sql.SQLException: (Simba)JDBC Null pointer exception.
2019/09/25 17:34:43 - Insert / update.0 - at com.simba.googlebigquery.googlebigquery.utils.BQCoreUtils.getParamValueAsText(Unknown Source)
2019/09/25 17:34:43 - Insert / update.0 - at com.simba.googlebigquery.googlebigquery.dataengine.BQSQLExecutor.execute(Unknown Source)
2019/09/25 17:34:43 - Insert / update.0 - at com.simba.googlebigquery.jdbc.common.SPreparedStatement.executeWithParams(Unknown Source)
2019/09/25 17:34:43 - Insert / update.0 - at com.simba.googlebigquery.jdbc.common.SPreparedStatement.executeAnyUpdate(Unknown Source)
2019/09/25 17:34:43 - Insert / update.0 - at com.simba.googlebigquery.jdbc.common.SPreparedStatement.executeUpdate(Unknown Source)
2019/09/25 17:34:43 - Insert / update.0 - at org.pentaho.di.core.database.Database.insertRow(Database.java:1291)
2019/09/25 17:34:43 - Insert / update.0 - ... 7 more
2019/09/25 17:34:43 - Insert / update.0 - Caused by: java.lang.NullPointerException

What am I doing wrong, and what should I do to make this Oracle-to-BigQuery insert/update work in Pentaho?

oracle – recommended optimization tips for this query

I have this query:

select decode(sum(txn_amount), '', 0, sum(txn_amount)) as txn_amount,
       count(txn_id) as txn_num,
       currency_code,
       (select rpt_descr
        from   rptcodesmtb
        where  rpt_code = txnmtb.txn_type
        and    rpt_type = 'txn_type'
        and    merchant_id = txnmtb.merchant_id) as txn_type
from   txnmtb
where  txn_date between TO_DATE('2010-09-01', 'YYYY-MM-DD')
                    and TO_DATE('2019-09-25', 'YYYY-MM-DD')
and    merchant_id = 2
and    txn_type in ('FTO','DOMACH','BUYCOUPON','INTBOOKTRF','SIDOMRTGS','RTGS',
                    'BP_SCHEDR','DOMRTGS','DOMESTICRTGS','BP','FT_SCHEDR','MC',
                    'DOMESTICACH','INTERNAL','DOMBOOK','NETBANK','PP_SCHEDR',
                    'PREPAID','ACH','DOMESTIC','SIDOMBOOKTR','SIBILLPAY',
                    'SENDGIFT','FTA','FTB','SIDOMACH','MC_SCHEDULER','SIDOMESTIC',
                    'DOMESTICBOOKTRF','REDEEMGIFT','FTL','MCC_SCHEDR')
group by merchant_id, txn_type, currency_code

I get a total cost of 6062, most of it attributable to the date-range predicate. Even after adding an index for the WHERE condition, such as create index txnmtb_idx on txnmtb (TO_DATE(txn_date), merchant_id, txn_type), I did not see the cost decrease; nor did changing the function from TO_DATE to TO_TIMESTAMP help, even though the data type of the txn_date column is DATE. Oracle still seems to perform a full scan of the table, whether or not the index is present. (One candidate index is sketched below.)
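A hedged suggestion, since the full execution plan isn't shown: if txn_date is already a DATE column, index the bare columns instead of wrapping them in TO_DATE, because a function-based TO_DATE(txn_date) index cannot serve a range predicate on the plain column:

-- Equality predicate (merchant_id) leads, the range predicate (txn_date)
-- follows, and txn_type supports the IN-list filter. Name from the question.
CREATE INDEX txnmtb_idx ON txnmtb (merchant_id, txn_date, txn_type);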

In addition, I would like to know whether the scalar subquery, select rpt_descr from rptcodesmtb where rpt_code = txnmtb.txn_type and rpt_type = 'txn_type' and merchant_id = txnmtb.merchant_id, can be replaced with a LEFT JOIN; the cost reported for this subquery is 1.
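In principle yes; a minimal sketch of the outer-join form (untested; it assumes rpt_code, rpt_type and merchant_id identify at most one rptcodesmtb row, as the scalar subquery already requires, and rpt_descr must be added to the GROUP BY):

SELECT NVL(SUM(t.txn_amount), 0) AS txn_amount,  -- DECODE(SUM(x),'',0,SUM(x)) = NVL(SUM(x),0), since '' is NULL in Oracle
       COUNT(t.txn_id)           AS txn_num,
       t.currency_code,
       r.rpt_descr               AS txn_type
FROM   txnmtb t
LEFT JOIN rptcodesmtb r
       ON  r.rpt_code    = t.txn_type
       AND r.rpt_type    = 'txn_type'
       AND r.merchant_id = t.merchant_id
WHERE  t.txn_date BETWEEN TO_DATE('2010-09-01', 'YYYY-MM-DD')
                      AND TO_DATE('2019-09-25', 'YYYY-MM-DD')
AND    t.merchant_id = 2
AND    t.txn_type IN ('FTO', 'DOMACH' /* ... same full list as in the original query ... */)
GROUP BY t.merchant_id, t.txn_type, t.currency_code, r.rpt_descr;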

Chronological list of statements in an Oracle trace

I use a trace in Oracle (11.2) to elucidate the behavior of a client application and wish to display a list of all the SQL statements submitted by it, in the order in which they were submitted. To this end, I activate the trace using:

EXECUTE DBMS_SYSTEM.SET_EV(<sid>, <serial#>, 10046, 12, '')
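(On 11.2 the same waits-and-binds trace can also be enabled through the supported DBMS_MONITOR interface, using the sid and serial# from v$session:)

EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => <sid>, serial_num => <serial#>, waits => TRUE, binds => TRUE)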

Enabling the trace produces a trace file containing the information I am looking for, albeit in a format that is hard to consume. As a result, I format the raw trace using:

tkprof raw.trc formatted.out explain=user/password aggregate=no sys=no waits=yes record=formatted.rec

This facilitates readability, but also masks the sequence of actual events in both output files (formatted.out, formatted.rec).

Specifically, the client keeps the cursors of recurring statements open and re-executes them on subsequent invocations. The default (and, it seems, only) behavior of tkprof is:

  • in the main output file, to combine the statistics for all executions of a statement
  • in the record file, to list each statement only once, at its initial parse

(The latter assumes I manage to catch the parse event in the first place, which usually requires flushing the shared pool before enabling the trace, but I'm getting off topic.)

For example, if the client submits statements such as the following, keeping the cursors open:

SELECT column1 FROM table1 WHERE column2 = :value;
INSERT INTO table1 (column1, column2) VALUES (:value1, :value2);
SELECT column1 FROM table1 WHERE column2 = :value;

The output record file displays only two statements:

SELECT column1 FROM table1 WHERE column2 = :value;
INSERT INTO table1 (column1, column2) VALUES (:value1, :value2);

As far as I know, there is no evidence that the first statement was executed a second time.

Similarly, the main output file will only show reports for two statements, although the first indicates that the SELECT statement was executed/fetched twice (a partial extract follows):

call     count       cpu    elapsed       disk      query    current        rows    
------- ------  -------- ---------- ---------- ---------- ----------  ----------    
Parse        1      0.00       0.00          0          0          0           0    
Execute      2      0.00       0.00          0          4          6           0    
Fetch        2      0.00       0.00          0          0          0           2    
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        5      0.00       0.00          0          4          6           2

This formatted output does not show which statements were executed in which order – only that one of them was executed twice at some point.

As far as I can tell, I have to read the raw trace file and painstakingly follow the cursor numbers (some of which are reused for different statements over time) to reconstruct the actual sequence of events as they occurred.

My question is:

  • Is there a tkprof option, or an alternative to tkprof, that extracts
    from a raw trace file the list of all SQL statements in their order of
    execution, even in cases where the same cursor is kept open and
    executed multiple times?

Ideally, this output would include the bind variable values for each execution, or at least the SQL ID or cursor ID, so that I could go back and find them in the raw trace.
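In the meantime, a rough shell sketch of that manual approach (it assumes the standard 10046 trace layout, captures only the first line of multi-line statements, and ignores recursive depth):

# Pair each EXEC with the statement most recently parsed under that
# cursor number; print the executions in file (i.e. chronological) order.
awk '
  /^PARSING IN CURSOR/ { cur = $4; getline sql[cur] }
  /^EXEC #/            { split($2, a, ":"); printf "%d: %s %s\n", ++n, a[1], sql[a[1]] }
' raw.trc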

Thank you in advance.