DBA Blogs

Restoring a Datafile into ASM

Hemant K Chitale - Sat, 2020-05-23 03:52
What happens to the file name when you restore a datafile into ASM?

I create a new tablespace and datafile.
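The screenshots from the original post did not survive aggregation. A minimal sketch of this step, assuming a diskgroup named +DATA (an assumption; only the tablespace name "t1" is confirmed by the file name quoted later):

```sql
-- Sketch only: '+DATA' is an assumed diskgroup name and the size is illustrative.
-- Specifying just the diskgroup makes the datafile an Oracle Managed File,
-- so ASM generates the actual file name (e.g. t1.303.1041178221).
SQL> create tablespace t1 datafile '+DATA' size 100M;
```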


I then make a backup of the tablespace/datafile
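The backup screenshot is also missing; a hedged RMAN sketch of what this step might look like (the tag is illustrative):

```sql
RMAN> backup as backupset tablespace t1 tag 'T1_BKP';
```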


I shutdown the database and remove the datafile physically
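A sketch of this step; the ASM path is an assumption pieced together from the file name quoted later in the post:

```sql
SQL> shutdown immediate;
-- then, at the OS level (path is an assumption):
-- $ asmcmd rm +DATA/ORCLCDB/DATAFILE/t1.303.1041178221
```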





Now I startup the database and restore the datafile
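The exact sequence is not visible without the screenshots; one plausible sketch is to mount the database and restore from the backup taken earlier:

```sql
SQL> startup mount;
RMAN> restore tablespace t1;
```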




Now, I recover the datafile
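Again a sketch, continuing the assumed sequence:

```sql
RMAN> recover tablespace t1;
RMAN> alter database open;
```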



Now, I check the datafile name
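One way to check the restored datafile's name (the screenshot is missing; the tablespace name T1 comes from the file name quoted below):

```sql
SQL> select d.file#, d.name
     from v$datafile d, v$tablespace t
     where d.ts# = t.ts#
     and t.name = 'T1';
```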




The alert log also shows me the restored (new) file name



The trailing portion of the file name changed from "t1.303.1041178221" to "t1.303.1041179951".
(The "t1" is actually the Tablespace Name).

So, we can see that ASM actually renames the file: it is an Oracle Managed File. Every time you place (i.e. restore) a datafile into ASM, the file name changes. However, the controlfile and data dictionary are also updated correctly.

Categories: DBA Blogs

RMAN Backup of a Standby Database

Hemant K Chitale - Fri, 2020-05-22 09:02
A Standby Database can be backed up even when Recovery is in progress. The ArchiveLogs at the Standby can also be backed up.

STDBYDB>sqlplus '/ as sysdba'

SQL*Plus: Release 19.0.0.0.0 - Production on Fri May 22 21:49:08 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1207955552 bytes
Fixed Size 9134176 bytes
Variable Size 436207616 bytes
Database Buffers 754974720 bytes
Redo Buffers 7639040 bytes
Database mounted.
SQL> alter database recover managed standby database disconnect from session;

Database altered.

SQL> quit
Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
STDBYDB>cd
STDBYDB>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Fri May 22 21:51:33 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCLCDB (DBID=2778483057, not open)

RMAN> backup as compressed backupset database ;

Starting backup at 22-MAY-20
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=261 device type=DISK
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/opt/oracle/oradata/STDBYDB/system01.dbf
input datafile file number=00003 name=/opt/oracle/oradata/STDBYDB/sysaux01.dbf
input datafile file number=00004 name=/opt/oracle/oradata/STDBYDB/undotbs01.dbf
input datafile file number=00007 name=/opt/oracle/oradata/STDBYDB/users01.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAY-20
channel ORA_DISK_1: finished piece 1 at 22-MAY-20
piece handle=/opt/oracle/FRA/STDBYDB/STDBYDB/backupset/2020_05_22/o1_mf_nnndf_TAG20200522T215143_hdhp7hz1_.bkp tag=TAG20200522T215143 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00010 name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/sysaux01.dbf
input datafile file number=00009 name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/system01.dbf
input datafile file number=00012 name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/users01.dbf
input datafile file number=00011 name=/opt/oracle/oradata/STDBYDB/ORCLPDB1/undotbs01.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAY-20
channel ORA_DISK_1: finished piece 1 at 22-MAY-20
piece handle=/opt/oracle/FRA/STDBYDB/STDBYDB/8812E29864F47615E0530100007FA424/backupset/2020_05_22/o1_mf_nnndf_TAG20200522T215143_hdhp8m5x_.bkp tag=TAG20200522T215143 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting compressed full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00006 name=/opt/oracle/oradata/STDBYDB/pdbseed/sysaux01.dbf
input datafile file number=00005 name=/opt/oracle/oradata/STDBYDB/pdbseed/system01.dbf
input datafile file number=00008 name=/opt/oracle/oradata/STDBYDB/pdbseed/undotbs01.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAY-20
channel ORA_DISK_1: finished piece 1 at 22-MAY-20
piece handle=/opt/oracle/FRA/STDBYDB/STDBYDB/88129263B99F4BBDE0530100007F7BDF/backupset/2020_05_22/o1_mf_nnndf_TAG20200522T215143_hdhp9dd9_.bkp tag=TAG20200522T215143 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:15
Finished backup at 22-MAY-20

Starting Control File and SPFILE Autobackup at 22-MAY-20
piece handle=/opt/oracle/FRA/STDBYDB/STDBYDB/autobackup/2020_05_22/o1_mf_s_1041112130_hdhp9w13_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 22-MAY-20

RMAN>
RMAN> backup archivelog all delete input;

Starting backup at 22-MAY-20
RMAN-06820: warning: failed to archive current log at primary database
cannot connect to remote database
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=2 RECID=23 STAMP=1036111049
input archived log thread=1 sequence=3 RECID=24 STAMP=1036111158
input archived log thread=1 sequence=4 RECID=25 STAMP=1036111512
input archived log thread=1 sequence=5 RECID=28 STAMP=1039904282
input archived log thread=1 sequence=6 RECID=27 STAMP=1039904282
input archived log thread=1 sequence=7 RECID=26 STAMP=1039904282
input archived log thread=1 sequence=8 RECID=29 STAMP=1039904380
input archived log thread=1 sequence=9 RECID=30 STAMP=1039905582
input archived log thread=1 sequence=10 RECID=31 STAMP=1039905628
input archived log thread=1 sequence=11 RECID=32 STAMP=1039905646
input archived log thread=1 sequence=12 RECID=33 STAMP=1039905901
input archived log thread=1 sequence=13 RECID=34 STAMP=1039905901
input archived log thread=1 sequence=14 RECID=36 STAMP=1040897941
input archived log thread=1 sequence=15 RECID=35 STAMP=1040897941
input archived log thread=1 sequence=16 RECID=37 STAMP=1040899336
input archived log thread=1 sequence=17 RECID=38 STAMP=1040899695
input archived log thread=1 sequence=18 RECID=41 STAMP=1040900079
input archived log thread=1 sequence=19 RECID=39 STAMP=1040900076
input archived log thread=1 sequence=20 RECID=40 STAMP=1040900078
input archived log thread=1 sequence=21 RECID=42 STAMP=1040900158
input archived log thread=1 sequence=22 RECID=43 STAMP=1040900194
input archived log thread=1 sequence=23 RECID=44 STAMP=1040900973
input archived log thread=1 sequence=24 RECID=45 STAMP=1040901045
input archived log thread=1 sequence=25 RECID=46 STAMP=1040901776
input archived log thread=1 sequence=26 RECID=47 STAMP=1040901781
input archived log thread=1 sequence=27 RECID=48 STAMP=1041112167
input archived log thread=1 sequence=28 RECID=50 STAMP=1041112168
input archived log thread=1 sequence=29 RECID=49 STAMP=1041112167
channel ORA_DISK_1: starting piece 1 at 22-MAY-20
channel ORA_DISK_1: finished piece 1 at 22-MAY-20
piece handle=/opt/oracle/FRA/STDBYDB/STDBYDB/backupset/2020_05_22/o1_mf_annnn_TAG20200522T215348_hdhpcf7y_.bkp tag=TAG20200522T215348 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/opt/oracle/archivelog/STDBYDB/1_2_1036108814.dbf RECID=23 STAMP=1036111049
archived log file name=/opt/oracle/archivelog/STDBYDB/1_3_1036108814.dbf RECID=24 STAMP=1036111158
archived log file name=/opt/oracle/archivelog/STDBYDB/1_4_1036108814.dbf RECID=25 STAMP=1036111512
archived log file name=/opt/oracle/archivelog/STDBYDB/1_5_1036108814.dbf RECID=28 STAMP=1039904282
archived log file name=/opt/oracle/archivelog/STDBYDB/1_6_1036108814.dbf RECID=27 STAMP=1039904282
archived log file name=/opt/oracle/archivelog/STDBYDB/1_7_1036108814.dbf RECID=26 STAMP=1039904282
archived log file name=/opt/oracle/archivelog/STDBYDB/1_8_1036108814.dbf RECID=29 STAMP=1039904380
archived log file name=/opt/oracle/archivelog/STDBYDB/1_9_1036108814.dbf RECID=30 STAMP=1039905582
archived log file name=/opt/oracle/archivelog/STDBYDB/1_10_1036108814.dbf RECID=31 STAMP=1039905628
archived log file name=/opt/oracle/archivelog/STDBYDB/1_11_1036108814.dbf RECID=32 STAMP=1039905646
archived log file name=/opt/oracle/archivelog/STDBYDB/1_12_1036108814.dbf RECID=33 STAMP=1039905901
archived log file name=/opt/oracle/archivelog/STDBYDB/1_13_1036108814.dbf RECID=34 STAMP=1039905901
archived log file name=/opt/oracle/archivelog/STDBYDB/1_14_1036108814.dbf RECID=36 STAMP=1040897941
archived log file name=/opt/oracle/archivelog/STDBYDB/1_15_1036108814.dbf RECID=35 STAMP=1040897941
archived log file name=/opt/oracle/archivelog/STDBYDB/1_16_1036108814.dbf RECID=37 STAMP=1040899336
archived log file name=/opt/oracle/archivelog/STDBYDB/1_17_1036108814.dbf RECID=38 STAMP=1040899695
archived log file name=/opt/oracle/archivelog/STDBYDB/1_18_1036108814.dbf RECID=41 STAMP=1040900079
archived log file name=/opt/oracle/archivelog/STDBYDB/1_19_1036108814.dbf RECID=39 STAMP=1040900076
archived log file name=/opt/oracle/archivelog/STDBYDB/1_20_1036108814.dbf RECID=40 STAMP=1040900078
archived log file name=/opt/oracle/archivelog/STDBYDB/1_21_1036108814.dbf RECID=42 STAMP=1040900158
archived log file name=/opt/oracle/archivelog/STDBYDB/1_22_1036108814.dbf RECID=43 STAMP=1040900194
archived log file name=/opt/oracle/archivelog/STDBYDB/1_23_1036108814.dbf RECID=44 STAMP=1040900973
archived log file name=/opt/oracle/archivelog/STDBYDB/1_24_1036108814.dbf RECID=45 STAMP=1040901045
archived log file name=/opt/oracle/archivelog/STDBYDB/1_25_1036108814.dbf RECID=46 STAMP=1040901776
archived log file name=/opt/oracle/archivelog/STDBYDB/1_26_1036108814.dbf RECID=47 STAMP=1040901781
archived log file name=/opt/oracle/archivelog/STDBYDB/1_27_1036108814.dbf RECID=48 STAMP=1041112167
archived log file name=/opt/oracle/archivelog/STDBYDB/1_28_1036108814.dbf RECID=50 STAMP=1041112168
archived log file name=/opt/oracle/archivelog/STDBYDB/1_29_1036108814.dbf RECID=49 STAMP=1041112167
Finished backup at 22-MAY-20

Starting Control File and SPFILE Autobackup at 22-MAY-20
piece handle=/opt/oracle/FRA/STDBYDB/STDBYDB/autobackup/2020_05_22/o1_mf_s_1041112439_hdhpdvgv_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 22-MAY-20

RMAN>


The controlfile backup at a Standby is marked as a Standby Control File.

STDBYDB>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Fri May 22 21:55:11 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCLCDB (DBID=2778483057, not open)

RMAN> list backup of controlfile;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
5 Full 17.95M DISK 00:00:01 22-MAY-20
BP Key: 5 Status: AVAILABLE Compressed: NO Tag: TAG20200522T215259
Piece Name: /opt/oracle/FRA/STDBYDB/STDBYDB/autobackup/2020_05_22/o1_mf_s_1041112130_hdhp9w13_.bkp
Standby Control File Included: Ckp SCN: 4962504 Ckp time: 22-MAY-20

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
7 Full 17.95M DISK 00:00:01 22-MAY-20
BP Key: 7 Status: AVAILABLE Compressed: NO Tag: TAG20200522T215434
Piece Name: /opt/oracle/FRA/STDBYDB/STDBYDB/autobackup/2020_05_22/o1_mf_s_1041112439_hdhpdvgv_.bkp
Standby Control File Included: Ckp SCN: 4963994 Ckp time: 22-MAY-20

RMAN>


This is different from the controlfile backup at the Primary database :

oracle19c>rman target /

Recovery Manager: Release 19.0.0.0.0 - Production on Fri May 22 22:00:06 2020
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle and/or its affiliates. All rights reserved.

connected to target database: ORCLCDB (DBID=2778483057)

RMAN> list backup of controlfile;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1 Full 17.95M DISK 00:00:01 23-FEB-20
BP Key: 1 Status: AVAILABLE Compressed: NO Tag: TAG20200223T224744
Piece Name: /opt/oracle/product/19c/dbhome_1/dbs/c-2778483057-20200223-00
Control File Included: Ckp SCN: 4648095 Ckp time: 23-FEB-20

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2 Full 17.95M DISK 00:00:01 27-MAR-20
BP Key: 2 Status: AVAILABLE Compressed: NO Tag: TAG20200327T000044
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2020_03_27/o1_mf_s_1036108844_h7snffbx_.bkp
Control File Included: Ckp SCN: 4798190 Ckp time: 27-MAR-20

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
6 Full 18.02M DISK 00:00:01 22-MAY-20
BP Key: 6 Status: AVAILABLE Compressed: NO Tag: TAG20200522T215930
Piece Name: /opt/oracle/FRA/ORCLCDB/ORCLCDB/autobackup/2020_05_22/o1_mf_s_1041112770_hdhpp2vc_.bkp
Control File Included: Ckp SCN: 4965065 Ckp time: 22-MAY-20

RMAN>


You can see that here the controlfile backup doesn't say "Primary" but just "Control File".


Categories: DBA Blogs

SSIS in AWS RDS

Pakistan's First Oracle Blog - Fri, 2020-05-22 03:38
Whenever migrating a SQL Server database from on-prem to AWS Cloud, my first preference is always to move it to AWS RDS, the managed database service. So whenever a client asks me to migrate an on-prem SQL Server database, my first question is:


Do you need to access the filesystem as part of this database's operations?

(Secretly wishing the answer would be NO.) But more often than not, SSIS is the deal-breaker in such database migrations, and the database ends up on an EC2 instance, which is still better than having it on-prem.

Managing a SQL Server on EC2 seems like a heavy chore when your other SQL Servers are humming smoothly on RDS and you know you don't have to nurse and babysit them. Well, the prayers have been answered and the days of looking at those EC2-based SQL Servers having SSIS are numbered.

AWS has announced SSIS support on RDS. For now, it's only compatible with SQL Server 2016 and 2017, which is a bit of a bummer, but still a welcome thing. SSIS is enabled through option groups in RDS, and you have to set up the S3 integration, which is fairly straightforward. You can find step-by-step instructions here.
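A rough AWS CLI sketch of the option-group step (the option group name, engine edition, and version below are illustrative assumptions; follow the step-by-step instructions for the exact values for your setup):

```shell
# Sketch only: names, edition, and version are illustrative assumptions.
aws rds create-option-group \
    --option-group-name ssis-se-2017 \
    --engine-name sqlserver-se \
    --major-engine-version 14.00 \
    --option-group-description "Option group with SSIS"

aws rds add-option-to-option-group \
    --option-group-name ssis-se-2017 \
    --options "OptionName=SSIS" \
    --apply-immediately
```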

Looking forward to migrating my SSIS-struck EC2-based SQL Servers to RDS now.


Categories: DBA Blogs

Date Partitioning a table

Tom Kyte - Fri, 2020-05-22 02:26
Hi Tom I have a system I am working on that will require old data to be removed every week or so. The customer does not want any down time. Is it possible to create a partitioned table that I could define 31 or so partitions and load data in a dif...
Categories: DBA Blogs

Scheduler Jobs not starting at the requested time

Tom Kyte - Fri, 2020-05-22 02:26
Solaris 11.4 Oracle 12.2 Hi, we have about 70 Schedueler jobs defined, which are starting in intervalls from 1 minute to about 1 month. Sometimes (about 2 to 3 times a weeks) there are time-windows of about 1 to 60 minutes , in which none of th...
Categories: DBA Blogs

View pdf file (saved in db server directory)through form or report in oracle ebs

Tom Kyte - Fri, 2020-05-22 02:26
I have this file named for example" contarct1.pdf" that is saved in the database server directory I can retrieve that directory path an details when i query from "dba_directories" view I need to view this pdf file to user with any of the two op...
Categories: DBA Blogs

Clarification about A-Time column in execution plan

Tom Kyte - Fri, 2020-05-22 02:26
Hi Experts, I've the following execution plan. As you can see, the "A-Time" of operation 3 is bigger than its parent. Operation 2 behave in the same way with it's parent. Furthermore operation 0 has only 0.03 second. How can this happens? The "A-T...
Categories: DBA Blogs

Strange Behaviour with Oracle_Home and trailing slash

Tom Kyte - Fri, 2020-05-22 02:26
Hi, I have a server hosting several instances based on the same oracle engine 19.3 (19.0.0.0). As an example, let's consider DB1 and DB2. We connect to the server with a user account member of dba group. In the same user session : <code> e...
Categories: DBA Blogs

Making a transparent application database switchover with Data Guard

Tom Kyte - Fri, 2020-05-22 02:26
Hello, Ask TOM Team. I have a 2-node RAC production database (18.6) with Data Guard configured (using data guard broker). My app's connection strings are pointing to scan-name\db-service. What are the steps to make a database switchover withou...
Categories: DBA Blogs

How to Restore ASM Based OCR After Complete Loss of the CRS Diskgroup on Linux/Unix Systems for the 18.c version

Tom Kyte - Fri, 2020-05-22 02:26
I have seen tho note: "How to Restore ASM Based OCR After Complete Loss of the CRS Diskgroup on Linux/Unix Systems (Doc ID 1062983.1)" This note: "APPLIES TO: Oracle Database - Enterprise Edition - Version 11.2.0.1.0 and later" In the step ...
Categories: DBA Blogs

Keep changed data in a separate table

Tom Kyte - Thu, 2020-05-21 08:06
Hi, Application team have a requirement to keep data changes on particular tables (All dml's) for 4 days to generate reports. Initially I found below 3 options - 1. Create a trigger for DML's on tables which will insert changed data in another...
Categories: DBA Blogs

Want to pass schema name as dynamically mode

Tom Kyte - Thu, 2020-05-21 08:06
My question is over here that I just wann to pass 'Scott' schema name as dynamic,because schema name has changed every 2 month like 'scott.1' and ' scott.2' and so on. Select prod_id,prod_name from product A,scott.product_details B where A.prod_...
Categories: DBA Blogs

Scraping web data

RDBMS Insight - Wed, 2020-05-20 20:36

I wanted to get some data off an agent listing website and into a spreadsheet. I’d been meaning to play around with python for web scraping and this was the perfect excuse: There were just enough results that it would take longer to manually copy and paste them than to write a little python program. (I never want to automate something that will take less time to do than to automate, as long as I’m only going to do it once or twice…)

To get and post the search form, I used requests rather than urllib because dang! is it ever easier to work with. The requests session kept track of the requisite asp.net sessionID cookie without a single line of code on my part. I used BeautifulSoup to process the HTML result pages, and it was fun, if counterintuitive.

# import libraries
import requests
import re
from bs4 import BeautifulSoup
 
# the advanced search for agentquery.com is
# https://agentquery.com/search_advanced.aspx
# it's an aspx page which requires the session cookie to be set
# so we'll make a requests session
s = requests.Session()
 
# first HTTP request without form data 
# get the form to set session cookie and get some hidden form values
myurl = 'https://agentquery.com/search_advanced.aspx'
f = s.get(myurl)
 
# parse and retrieve three vital form values
soup = BeautifulSoup(f.text, 'html.parser')
viewstate = soup.select("#__VIEWSTATE")[0]['value']
viewstategenerator = soup.select("#__VIEWSTATEGENERATOR")[0]['value']
eventvalidation = soup.select("#__EVENTVALIDATION")[0]['value']
 
# fill the form data 
 
# Here are the boxes I want checked:
# ctl00$chkFiction$15 is Middle Grade
# ctl00$chkFiction$22 is Science Fiction
# ctl00$chkFiction$8 is Fantasy
 
# ctl00$btnSearch is the search button, must set to 'Search' for the POST to return results
# ctl00$drpSeek is the drop-down for "ARE YOU LOOKING FOR AN AGENT WHO IS ACTIVELY SEEKING NEW CLIENTS?"
 
mypayload = {
    '__EVENTVALIDATION': eventvalidation,
    '__VIEWSTATE': viewstate,
    '__VIEWSTATEGENERATOR':viewstategenerator,
    'ctl00$chkFiction$15': 'on',
    'ctl00$chkFiction$22': 'on',
    'ctl00$chkFiction$8': 'on',
    'ctl00$btnSearch': 'Search',
    'ctl00$drpSeek': 'Yes'
}
 
# Now we can make the second HTTP request with form data
# this gets the first page of results
f = s.post(myurl,mypayload)
 
# open output file for writing
try:
    file = open('tmp.csv', 'w')
except OSError as e:
    # re-raise: otherwise file.write() below would hit an undefined name
    print('Could not open output file:', e)
    raise
 
getmore='true'
 
while(getmore=='true'):
 
  # parse the html 
  soup = BeautifulSoup(f.text, 'html.parser')
 
  # sift out the agent data from this page
  results=soup.find_all(id=re.compile("dlResults_ctl.*(lnkAgent$|lnkAgency|lblEmail)"))
 
  # example output:
  # <a class="result" href="agent.aspx?agentid=1128" id="ctl00_dlResults_ctl00_lnkAgent">Suzie Townsend</a>
  # <a class="result" href="http://www.publishersmarketplace.com/members/sztownsend81//" id="ctl00_dlResults_ctl00_lnkAgency" target="_blank" rel="noopener noreferrer">New Leaf Literary and Media</a>
  # <span id="ctl00_dlResults_ctl00_lblEmail">query@newleafliterary.com, put QUERY--SUZIE in the subject line</span>
 
  for i in range(0,len(results),3):
 
    a_agentid=results[i].get_attribute_list('id')[0]
    a_agentlink=results[i].get_attribute_list('href')[0]
    a_agentname=results[i].text
    a_agencyid=results[i+1].get_attribute_list('id')[0]
    a_agencyurl=results[i+1].get_attribute_list('href')[0]
    a_agencyname=results[i+1].text
    a_email=results[i+2].text
 
    # the url may be blank
    if a_agencyurl is None:
        a_agencyurl=''
    # create a row, values delimited by "|"
    row=a_agentname + "|https://agentquery.com/" +  a_agentlink + "|" + a_agencyname + "|" + a_agencyurl + "|" + a_email
    # print to screen
    print(row)
    # print to file
    file.write(row+"\n")
 
  # is there a next page? if so, load it. If not, stop.
  # the "Next" link is present only if there are more results & has id 'ctl00_Pager1_lbtnNext'
  n=soup.find(id='ctl00_Pager1_lbtnNext')
  if n is None:
    getmore='false'
  elif n.text=='Next':
    myurl="https://agentquery.com/" + n.get_attribute_list('href')[0]
    f = s.post(myurl,mypayload)
  else:
    getmore='false'
 
#while loop ends here 
file.close()
Categories: DBA Blogs

INDEX SCAN ON TIMESTAMP COLUMN

Tom Kyte - Wed, 2020-05-20 13:46
Hi Tom, PFB query structure: <code>select /*+ gather_plan_statistics */ count(*) FROM CLPT INNER JOIN SL ON CLPT.SHIPPER_LOT_ID = SL.ID AND CLPT.CLIENT_ID = SL.CLIENT_ID WHERE CLPT.CLIENT_ID = 3104 and CLPT.STATUS = 1 AND CLPT.CR...
Categories: DBA Blogs

Optimize a Package

Tom Kyte - Wed, 2020-05-20 13:46
Hello Experts, We have a package with object of 50 variables. Package will be invoked with 400 to 500 records with input object and We are looping through this 400 to 500 records and do some manipulation and insert into 4 tables which has sequence...
Categories: DBA Blogs

Partial indexing using multiple function indexes or precomputed json object

Tom Kyte - Wed, 2020-05-20 13:46
I have an orders table with a status flag showing whether that order has been completed. Most orders in the tables are completed: probably only 5%ish of the table are incomplete. I know I can build a partial index by using a function based index whi...
Categories: DBA Blogs

V$RECOVER_FILE and PDB$SEED and Standby Database

Hemant K Chitale - Tue, 2020-05-19 21:51
As a follow-up to my previous post, where I showed, among other things, that V$RECOVER_FILE may show PDB$SEED files as well, here is what I currently see on my Production (Primary) database:

SQL> l
1 select p.name pdbname, r.file#, f.name filename, r.online_status, r.error, r.time
2 from v$pdbs p, v$recover_file r, v$datafile f
3 where p.con_id=r.con_id
4 and r.con_id=f.con_id
5 and r.file#=f.file#
6* order by 1,2
SQL> /

PDBNAME FILE# FILENAME ONLINE_ ERROR TIME
------------ ---------- --------------------------------------------------- ------- -------- ---------
PDB$SEED 5 /opt/oracle/oradata/ORCLCDB/pdbseed/system01.dbf ONLINE 04-MAY-19
PDB$SEED 6 /opt/oracle/oradata/ORCLCDB/pdbseed/sysaux01.dbf ONLINE 04-MAY-19
PDB$SEED 8 /opt/oracle/oradata/ORCLCDB/pdbseed/undotbs01.dbf ONLINE 04-MAY-19

SQL>


and on my Standby database

SQL> l
1 select p.name pdbname, r.file#, f.name filename, r.online_status, r.error, r.time
2 from v$pdbs p, v$recover_file r, v$datafile f
3 where p.con_id=r.con_id
4 and r.con_id=f.con_id
5 and r.file#=f.file#
6* order by 1,2
SQL> /

PDBNAME FILE# FILENAME ONLINE_ ERROR TIME
------------ ---------- --------------------------------------------------- ------- -------- ---------
PDB$SEED 5 /opt/oracle/oradata/STDBYDB/pdbseed/system01.dbf ONLINE 04-MAY-19
PDB$SEED 6 /opt/oracle/oradata/STDBYDB/pdbseed/sysaux01.dbf ONLINE 04-MAY-19
PDB$SEED 8 /opt/oracle/oradata/STDBYDB/pdbseed/undotbs01.dbf ONLINE 04-MAY-19

SQL>


Now I go back to my Production (Primary) database and run these commands :

SQL> alter pluggable database pdb$seed open read write;
alter pluggable database pdb$seed open read write
*
ERROR at line 1:
ORA-65019: pluggable database PDB$SEED already open


SQL> alter pluggable database pdb$seed close;

Pluggable database altered.

SQL> alter pluggable database pdb$seed open read write;

Pluggable database altered.

SQL> alter pluggable database pdb$seed close;

Pluggable database altered.

SQL> alter pluggable database pdb$seed open read only;

Pluggable database altered.

SQL>
SQL> l
1 select p.name pdbname, r.file#, f.name filename, r.online_status, r.error, r.time
2 from v$pdbs p, v$recover_file r, v$datafile f
3 where p.con_id=r.con_id
4 and r.con_id=f.con_id
5 and r.file#=f.file#
6* order by 1,2
SQL> /

no rows selected

SQL>


So, switching the PDB$SEED to READ WRITE and back to READ ONLY clears the entry in V$RECOVER_FILE in the Production (Primary) database.

But on the Standby, I now see :

SQL> l
1 select p.name pdbname, r.file#, f.name filename, r.online_status, r.error, r.time
2 from v$pdbs p, v$recover_file r, v$datafile f
3 where p.con_id=r.con_id
4 and r.con_id=f.con_id
5 and r.file#=f.file#
6* order by 1,2
SQL> /

PDBNAME FILE# FILENAME ONLINE_ ERROR TIME
------------ ---------- --------------------------------------------------- ------- -------- ---------
PDB$SEED 5 /opt/oracle/oradata/STDBYDB/pdbseed/system01.dbf ONLINE 20-MAY-20
PDB$SEED 6 /opt/oracle/oradata/STDBYDB/pdbseed/sysaux01.dbf ONLINE 20-MAY-20
PDB$SEED 8 /opt/oracle/oradata/STDBYDB/pdbseed/undotbs01.dbf ONLINE 20-MAY-20

SQL>


So, now the Standby knows that the PDB$SEED needs recovery from 20-May-20 onwards. 
Normally, I would not be opening the PDB$SEED database on the Standby OR even on the Production (Primary) database.


Categories: DBA Blogs

Peculiar partitioning method and automatic multiple column list partitioning.

Tom Kyte - Tue, 2020-05-19 19:26
Hi I am a great fan of Oracle partitioning. In our environment there is however few shortcomings - unability to partition data by more than one date column - unability to subpartition data by non fixed number of partitions. (without any human...
Categories: DBA Blogs

connection settings optimization for a satellite channel between client and server (ping > 1.5 seconds)

Tom Kyte - Tue, 2020-05-19 01:06
Hello. Please if you can give any advice for a specific situation: a client (PL SQL Developer, Toad, or any other development tool) connects over a VPN to a server thru a satellite channel (quarantine and remote work). No big amounts of data tran...
Categories: DBA Blogs

PLSQL Package size

Tom Kyte - Tue, 2020-05-19 01:06
Hi, I am having a package with 10 Procedures in it. I am calling all the procedure's in the package in async call from different sessions. Is there any memory/performance impact on this.
Categories: DBA Blogs
