Feed aggregator

The ‘Unprecedented Challenge’ of Cybersecurity in an Age of Burgeoning Threats

Oracle Press Releases - Mon, 2019-05-20 15:43
Blog
The ‘Unprecedented Challenge’ of Cybersecurity in an Age of Burgeoning Threats

Barbara Darrow, Senior Director, Communications, Oracle—May 20, 2019

Technological and legal complexities abound in this age of heightened cybersecurity threats—including a rise in state-sponsored hacking. This “unprecedented challenge” was the topic of conversation between Dorian Daley, Oracle executive vice president and general counsel, and Edward Screven, Oracle’s chief corporate architect. Here are five key takeaways from their conversation:

1. Some good news: Businesses are aware of cybersecurity challenges in a way they were not even just a few years ago when, according to Screven, many considered security a priority in a general way but didn’t go much beyond that thought. “It’s [become] a front-and-center kind of issue for our customers,” Daley agreed.

2. These same customers would like to make data security “someone else’s problem,” and are right to think that way, Screven added. In this context, that “someone else” is a tech vendor able to design technology that is inherently more secure than what non-tech businesses could design for themselves.

3. Regulations around data privacy are getting more complicated, starting with the European Union’s General Data Protection Regulation, Daley noted. Data privacy and data security are slightly different sides of the same problem, she said, adding that “what’s happening on the privacy side is really an explosion of regulatory frameworks around the world.”

4. There’s only so much that employees can do—no matter how skilled they may be. Recent research shows that while most companies cite human error as a leading cause of data insecurity, they also keep throwing more people at a problem that can’t really be solved without a level of automation commensurate with the sophistication and volume of attacks. “There is a lack of sufficient awareness about what technology can actually do for customers,” Daley noted.

Fast, “autonomous” or self-applying software patches and updates are a solid way to mitigate or even prevent data loss from cyber hacks. Many of the attacks and subsequent data leaks over the past few years could have been avoided had available software patches been applied in a timely fashion.

AI and machine learning tech can catch far more anomalies that might indicate a security problem, such as unauthorized system access, and do so much faster than human experts can, eliminating issues before they get serious.

5. Screven is skeptical that international treaties, if such things could be crafted, would eradicate state-sponsored cyber hacking because much of that activity happens under the covers by contractors that can be disavowed by the states.

Thus, “the same person who’s out stealing your credit card today is out trying to steal plans for [Hellfire] missiles tomorrow,” Screven said.

Full video of the talk can be found here.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Oracle Certification 12c Notes – DBA Track

VitalSoftTech - Mon, 2019-05-20 10:58
Oracle 12c OCP Exam Notes (1z0-060) Oracle DBA 12c Certification Notes Oracle 12c – Manage Multitenant CDB and PDBs with EM Express Manage Multitenant CDB and PDBs with EM Express. Quick and easy steps to create, setup and deploy EM Express. Oracle 12c Database: Create 12c CDB, PDB Databases Using OUI When installing Oracle Software, […]
Categories: DBA Blogs

OIM/OIG – IDCS Integration : [Solved] javax.net.ssl.SSLHandshakeException : PKIX Path Building Failed

Online Apps DBA - Mon, 2019-05-20 07:49

[Troubleshooting]: OIM/OIG – IDCS Connector Issue : SSL Handshake & How To Fix. The Oracle IDCS Connector is used to provision & reconcile users between OIM/OIG and IDCS. IDCS always listens on SSL (HTTPS), so you must import the IDCS certificates into OIM; if you don’t, you’ll see an SSL handshake error while running the schedule job IDCS […]

The post OIM/OIG – IDCS Integration : [Solved] javax.net.ssl.SSLHandshakeException : PKIX Path Building Failed appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

3200 Clever hackers are in my PC; wow!!

Pete Finnigan - Sun, 2019-05-19 21:06
Hackers are clever people; they must be to hack other people, take over their private data, and steal identities and money. But I have to draw the line at the number of hackers who claim to be in my PC....[Read More]

Posted by Pete On 19/05/19 At 10:08 PM

Categories: Security Blogs

Shocking opatchauto resume works after auto-logout

Michael Dinh - Sun, 2019-05-19 12:36

WARNING: Please don’t try this at home or in production environment.

With that being said, patching was for DR production.

Oracle Interim Patch Installer version 12.2.0.1.16

Patching 2 nodes RAC cluster and node1 completed successfully.

The rationale for using -norestart: there was an issue at one time where datapatch was applied on node1.

Active Data Guard is not implemented, and the database Start options are: mount

# crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.dbproddr.db
      2        ONLINE  INTERMEDIATE node2              Mounted (Closed),STABLE
ora.dbproddr.dbdr.svc
      2        ONLINE  OFFLINE                                          STABLE
--------------------------------------------------------------------------------

$ srvctl status database -d dbproddr -v
Instance dbproddr1 is running on node node1 with online services dbdr. Instance status: Open,Readonly.
Instance dbproddr2 is running on node node2. Instance status: Mounted (Closed).

Run opatchauto; Ctrl-C was issued from the session because it was stuck.

node2 ~ # export PATCH_TOP_DIR=/u01/software/patches/Jan2019

node2 ~ # $GRID_HOME/OPatch/opatchauto apply $PATCH_TOP_DIR/28833531 -norestart

OPatchauto session is initiated at Thu May 16 20:20:24 2019

System initialization log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchautodb/systemconfig2019-05-16_08-20-26PM.log.

Session log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-16_08-20-47PM.log
The id for this session is K43Y

Executing OPatch prereq operations to verify patch applicability on home /u02/app/12.1.0/grid

Executing OPatch prereq operations to verify patch applicability on home /u01/app/oracle/product/12.1.0/db
Patch applicability verified successfully on home /u01/app/oracle/product/12.1.0/db

Patch applicability verified successfully on home /u02/app/12.1.0/grid


Verifying SQL patch applicability on home /u01/app/oracle/product/12.1.0/db
"/bin/sh -c 'cd /u01/app/oracle/product/12.1.0/db; ORACLE_HOME=/u01/app/oracle/product/12.1.0/db ORACLE_SID=dbproddr2 /u01/app/oracle/product/12.1.0/db/OPatch/datapatch -prereq -verbose'" command failed with errors. Please refer to logs for more details. SQL changes, if any, can be analyzed by manually retrying the same command.

SQL patch applicability verified successfully on home /u01/app/oracle/product/12.1.0/db


Preparing to bring down database service on home /u01/app/oracle/product/12.1.0/db
Successfully prepared home /u01/app/oracle/product/12.1.0/db to bring down database service


Bringing down CRS service on home /u02/app/12.1.0/grid
Prepatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_08-21-16PM.log
CRS service brought down successfully on home /u02/app/12.1.0/grid


Performing prepatch operation on home /u01/app/oracle/product/12.1.0/db
Prepatch operation completed successfully on home /u01/app/oracle/product/12.1.0/db


Start applying binary patch on home /u01/app/oracle/product/12.1.0/db
Binary patch applied successfully on home /u01/app/oracle/product/12.1.0/db


Performing postpatch operation on home /u01/app/oracle/product/12.1.0/db
Postpatch operation completed successfully on home /u01/app/oracle/product/12.1.0/db


Start applying binary patch on home /u02/app/12.1.0/grid

Binary patch applied successfully on home /u02/app/12.1.0/grid


Starting CRS service on home /u02/app/12.1.0/grid





*** Ctrl-C as shown below ***
^C
OPatchauto session completed at Thu May 16 21:41:58 2019
*** Time taken to complete the session 81 minutes, 34 seconds ***

opatchauto failed with error code 130

This is not good: the session disconnected (auto-logout) while I was troubleshooting in another session.

node2 ~ # timed out waiting for input: auto-logout

Even though the opatchauto session was terminated, the cluster upgrade state is [NORMAL] rather than [ROLLING PATCH].

node2 ~ # crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [323461694].

node2 ~ # crsctl stat res -t -w '((TARGET != ONLINE) or (STATE != ONLINE)'
node2 ~ # crsctl stat res -t -w 'TYPE = ora.database.type'
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details       
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.dbproddr.db
      1        ONLINE  ONLINE       node1              Open,Readonly,STABLE
      2        ONLINE  ONLINE       node2              Open,Readonly,STABLE
--------------------------------------------------------------------------------

At this point, I was not sure what to do since everything looked good and online.

The colleague helping me with troubleshooting stated that the patch had completed successfully; the main question was whether we needed to try “opatchauto resume”.

However, I was not comfortable with the outcome, so I tried opatchauto resume, and it worked like magic.

Reconnect and opatchauto resume

mdinh@node2 ~ $ sudo su - 
~ # . /home/oracle/working/dinh/gi.env
The Oracle base has been set to /u01/app/oracle
ORACLE_SID=+ASM4
ORACLE_BASE=/u01/app/oracle
GRID_HOME=/u02/app/12.1.0/grid
ORACLE_HOME=/u02/app/12.1.0/grid
Oracle Instance alive for sid "+ASM4"
~ # export PATCH_TOP_DIR=/u01/software/patches/Jan2019/
~ # $GRID_HOME/OPatch/opatchauto resume

OPatchauto session is initiated at Thu May 16 22:03:09 2019
Session log file is /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-16_10-03-10PM.log
Resuming existing session with id K43Y

Starting CRS service on home /u02/app/12.1.0/grid
Postpatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_10-03-17PM.log
CRS service started successfully on home /u02/app/12.1.0/grid


Preparing home /u01/app/oracle/product/12.1.0/db after database service restarted

OPatchauto is running in norestart mode. PDB instances will not be checked for database on the current node.
No step execution required.........
 

Trying to apply SQL patch on home /u01/app/oracle/product/12.1.0/db
SQL patch applied successfully on home /u01/app/oracle/product/12.1.0/db

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:node2
RAC Home:/u01/app/oracle/product/12.1.0/db
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/software/patches/Jan2019/28833531/26983807
Reason: This patch is not applicable to this specified target type - "rac_database"

Patch: /u01/software/patches/Jan2019/28833531/28729220
Reason: This patch is not applicable to this specified target type - "rac_database"


==Following patches were SUCCESSFULLY applied:

Patch: /u01/software/patches/Jan2019/28833531/28729213
Log: /u01/app/oracle/product/12.1.0/db/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-22-06PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28731800
Log: /u01/app/oracle/product/12.1.0/db/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-22-06PM_1.log


Host:node2
CRS Home:/u02/app/12.1.0/grid
Version:12.1.0.2.0
Summary:

==Following patches were SKIPPED:

Patch: /u01/software/patches/Jan2019/28833531/26983807
Reason: This patch is already been applied, so not going to apply again.


==Following patches were SUCCESSFULLY applied:

Patch: /u01/software/patches/Jan2019/28833531/28729213
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28729220
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log

Patch: /u01/software/patches/Jan2019/28833531/28731800
Log: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-16_20-23-32PM_1.log


Patching session reported following warning(s): 
_________________________________________________

[WARNING] The database instance 'drinstance2' from '/u01/app/oracle/product/12.1.0/db', in host'node2' is not running. SQL changes, if any,  will not be applied.
To apply. the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle).
Refer to the readme to get the correct steps for applying the sql changes.

[WARNING] The database instances will not be brought up under the 'norestart' option. The database instance 'drinstance2' from '/u01/app/oracle/product/12.1.0/db', in host'node2' is not running. SQL changes, if any,  will not be applied.
To apply. the SQL changes, bring up the database instance and run the command manually from any one node (run as oracle).
Refer to the readme to get the correct steps for applying the sql changes.


OPatchauto session completed at Thu May 16 22:10:01 2019
Time taken to complete the session 6 minutes, 52 seconds
~ # 

Logs:

oracle@node2:/u02/app/12.1.0/grid/cfgtoollogs/crsconfig
> ls -alrt
total 508
drwxr-x--- 2 oracle oinstall   4096 Nov 23 02:15 oracle
-rwxrwxr-x 1 oracle oinstall 167579 Nov 23 02:15 rootcrs_node2_2018-11-23_02-07-58AM.log
drwxrwxr-x 9 oracle oinstall   4096 Apr 10 12:05 ..

opatchauto apply - Prepatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_08-21-16PM.log
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  33020 May 16 20:22 crspatch_node2_2019-05-16_08-21-16PM.log
====================================================================================================

Mysterious log file - it is unknown where this log came from, because it did not appear in my terminal output.
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  86983 May 16 21:42 crspatch_node2_2019-05-16_08-27-35PM.log
====================================================================================================

-rwxrwxr-x 1 oracle oinstall  56540 May 16 22:06 srvmcfg1.log
-rwxrwxr-x 1 oracle oinstall  26836 May 16 22:06 srvmcfg2.log
-rwxrwxr-x 1 oracle oinstall  21059 May 16 22:06 srvmcfg3.log
-rwxrwxr-x 1 oracle oinstall  23032 May 16 22:08 srvmcfg4.log

opatchauto resume - Postpatch operation log file location: /u02/app/12.1.0/grid/cfgtoollogs/crsconfig/crspatch_node2_2019-05-16_10-03-17PM.log
====================================================================================================
-rwxrwxr-x 1 oracle oinstall  64381 May 16 22:09 crspatch_node2_2019-05-16_10-03-17PM.log
====================================================================================================

Prepatch operation log file.

> tail -20 crspatch_node2_2019-05-16_08-21-16PM.log
2019-05-16 20:22:04: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH_OOP_REQSTEPS
2019-05-16 20:22:04: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH_OOP_REQSTEPS '
2019-05-16 20:22:04: Removing file /tmp/fileTChFoS
2019-05-16 20:22:04: Successfully removed file: /tmp/fileTChFoS
2019-05-16 20:22:04: pipe exit code: 0
2019-05-16 20:22:04: /bin/su successfully executed

2019-05-16 20:22:04: checkpoint ROOTCRS_POSTPATCH_OOP_REQSTEPS does not exist
2019-05-16 20:22:04: Done - Performing pre-pathching steps required for GI stack
2019-05-16 20:22:04: Resetting cluutil_trc_suff_pp to 0
2019-05-16 20:22:04: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS"
2019-05-16 20:22:04: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil0.log
2019-05-16 20:22:04: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS
2019-05-16 20:22:04: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state SUCCESS '
2019-05-16 20:22:04: Removing file /tmp/fileDoYyQA
2019-05-16 20:22:04: Successfully removed file: /tmp/fileDoYyQA
2019-05-16 20:22:04: pipe exit code: 0
2019-05-16 20:22:04: /bin/su successfully executed

*** 2019-05-16 20:22:04: Succeeded in writing the checkpoint:'ROOTCRS_PREPATCH' with status:SUCCESS ***

Mysterious log file – crspatch_node2_2019-05-16_08-27-35PM.log

2019-05-16 21:42:00: Succeeded in writing the checkpoint:'ROOTCRS_STACK' with status:FAIL
2019-05-16 21:42:00: ###### Begin DIE Stack Trace ######
2019-05-16 21:42:00:     Package         File                 Line Calling   
2019-05-16 21:42:00:     --------------- -------------------- ---- ----------
2019-05-16 21:42:00:  1: main            rootcrs.pl            267 crsutils::dietrap
2019-05-16 21:42:00:  2: crsutils        crsutils.pm          1631 main::__ANON__
2019-05-16 21:42:00:  3: crsutils        crsutils.pm          1586 crsutils::system_cmd_capture_noprint
2019-05-16 21:42:00:  4: crsutils        crsutils.pm          9098 crsutils::system_cmd_capture
2019-05-16 21:42:00:  5: crspatch        crspatch.pm           988 crsutils::startFullStack
2019-05-16 21:42:00:  6: crspatch        crspatch.pm          1121 crspatch::performPostPatch
2019-05-16 21:42:00:  7: crspatch        crspatch.pm           212 crspatch::crsPostPatch
2019-05-16 21:42:00:  8: main            rootcrs.pl            276 crspatch::new
2019-05-16 21:42:00: ####### End DIE Stack Trace #######

2019-05-16 21:42:00: ROOTCRS_POSTPATCH checkpoint has failed
2019-05-16 21:42:00:      ckpt: -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil4.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH '
2019-05-16 21:42:00: Removing file /tmp/filewniUim
2019-05-16 21:42:00: Successfully removed file: /tmp/filewniUim
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed

2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil5.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -chkckpt -name ROOTCRS_POSTPATCH -status '
2019-05-16 21:42:00: Removing file /tmp/fileK1Tyw6
2019-05-16 21:42:00: Successfully removed file: /tmp/fileK1Tyw6
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed

2019-05-16 21:42:00: The 'ROOTCRS_POSTPATCH' status is FAILED
2019-05-16 21:42:00: ROOTCRS_POSTPATCH state is FAIL
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil6.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state FAIL '
2019-05-16 21:42:00: Removing file /tmp/filej20epR
2019-05-16 21:42:00: Successfully removed file: /tmp/filej20epR
2019-05-16 21:42:00: pipe exit code: 0
2019-05-16 21:42:00: /bin/su successfully executed

2019-05-16 21:42:00: Succeeded in writing the checkpoint:'ROOTCRS_POSTPATCH' with status:FAIL
2019-05-16 21:42:00: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL"
2019-05-16 21:42:00: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil7.log
2019-05-16 21:42:00: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL
2019-05-16 21:42:00: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_STACK -state FAIL '
2019-05-16 21:42:01: Removing file /tmp/filely834C
2019-05-16 21:42:01: Successfully removed file: /tmp/filely834C
2019-05-16 21:42:01: pipe exit code: 0
2019-05-16 21:42:01: /bin/su successfully executed

*** 2019-05-16 21:42:01: Succeeded in writing the checkpoint:'ROOTCRS_STACK' with status:FAIL ***

Postpatch operation log file.

> tail -20 crspatch_node2_2019-05-16_10-03-17PM.log
2019-05-16 22:09:59: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START"
2019-05-16 22:09:59: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil7.log
2019-05-16 22:09:59: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START
2019-05-16 22:09:59: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_PREPATCH -state START '
2019-05-16 22:09:59: Removing file /tmp/file0IogVl
2019-05-16 22:09:59: Successfully removed file: /tmp/file0IogVl
2019-05-16 22:09:59: pipe exit code: 0
2019-05-16 22:09:59: /bin/su successfully executed

2019-05-16 22:09:59: Succeeded in writing the checkpoint:'ROOTCRS_PREPATCH' with status:START
2019-05-16 22:09:59: Invoking "/u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS"
2019-05-16 22:09:59: trace file=/u01/app/oracle/crsdata/node2/crsconfig/cluutil8.log
2019-05-16 22:09:59: Running as user oracle: /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS
2019-05-16 22:09:59: s_run_as_user2: Running /bin/su oracle -c ' echo CLSRSC_START; /u02/app/12.1.0/grid/bin/cluutil -ckpt -oraclebase /u01/app/oracle -writeckpt -name ROOTCRS_POSTPATCH -state SUCCESS '
2019-05-16 22:09:59: Removing file /tmp/fileXDCkuM
2019-05-16 22:09:59: Successfully removed file: /tmp/fileXDCkuM
2019-05-16 22:09:59: pipe exit code: 0
2019-05-16 22:09:59: /bin/su successfully executed

*** 2019-05-16 22:09:59: Succeeded in writing the checkpoint:'ROOTCRS_POSTPATCH' with status:SUCCESS ***

Happy patching, and hopefully the upcoming patching of the primary will be just as seamless.

[Q/A] EBS (R12) on OCI : Is Goldengate Certified for EBS Migration to Cloud??

Online Apps DBA - Sun, 2019-05-19 05:57

[Q/A] Can Oracle GoldenGate be used to Migrate (Lift & Shift) Oracle EBS (R12) from On-Premise to Cloud? This is the question I’ve been asked regularly. I’ve covered this all at: Check at http://bit.ly/2JPZqmY including ▪Overview of Goldengate ▪Overview of EBS Migration to Oracle Cloud ▪Is Goldengate certified for EBS migration to Cloud? ▪Other options […]

The post [Q/A] EBS (R12) on OCI : Is Goldengate Certified for EBS Migration to Cloud?? appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[Troubleshooting] EBS Cloud Manager 19.1.1 Deployment Issue because of IAM Policy/Compartment: ProvisionOCINetwork.pl

Online Apps DBA - Sat, 2019-05-18 01:04

[Troubleshooting] EBS Cloud Manager 19.1.1 Deployment Issue because of IAM Policy/Compartment If you are deploying the latest EBS Cloud Manager using a new deployment method then this deployment will fail while creating Network on OCI at ProvisionOCINetwork.pl script with Error NotAuthorizedOrNotFound Check out our new blog at http://k21academy.com/ebscloud32 and follow the Step-Wise workaround which we […]

The post [Troubleshooting] EBS Cloud Manager 19.1.1 Deployment Issue because of IAM Policy/Compartment: ProvisionOCINetwork.pl appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Loading email content into oracle table

Tom Kyte - Fri, 2019-05-17 21:46
Hi Tom, I have an interesting requirement, I want to load complete emails ( example outlook) in oracle tables. => When a mail content ( preview) is greater than 4000 than store in attachement table with name "long content" and store complete...
Categories: DBA Blogs

Ora-12560: TNS: protocol adapter error

Tom Kyte - Fri, 2019-05-17 21:46
I use a single-instance 12.2c 64-bit Oracle database on Windows Server 2012 R2. Suddenly this error <i>ORA-12560: TNS:protocol adapter error</i> began to show when I try to enter sqlplus. I have searched the internet a lot for a soluti...
Categories: DBA Blogs

Manipulate the autogenerated names for types inside packages

Tom Kyte - Fri, 2019-05-17 21:46
Hey, if you create a type inside a package, this type is created in the database with a name like 'SYS_...'. Is there any possibility to affect/influence the auto-generated name? Or can I rename it? And how? Why do I ask? I work a lot w...
Categories: DBA Blogs

exporting packages,function etc. from one user to another.

Tom Kyte - Fri, 2019-05-17 21:46
Hi, For example X user have many packages,functions,procedures etc. And I want to delete some of them after copying to another user (Y). I mean I want to classify packages,functions etc... I can copy-paste by using Procedure Builder. But this way...
Categories: DBA Blogs

SYSDATE behavior in SQL and PL/SQL

Tom Kyte - Fri, 2019-05-17 21:46
Hello, My guess: there are two different SYSDATE functions, one defined in the STANDARD package and another one somewhere “inside” Oracle. Example: SQL> select * from dual; D - X SQL> select sysdate from user_objects where rownum=1;...
Categories: DBA Blogs

Using YAML for Build Configuration in Oracle Developer Cloud

OTN TechBlog - Thu, 2019-05-16 16:34

In the April release, we introduced support for YAML-based build configuration in Oracle Developer Cloud. This blog will introduce you to scripting YAML-based build configurations in Developer Cloud.

Before I explain how to create your first build configuration using YAML on Developer Cloud, let’s take a look at a few things.

Why do we need YAML configuration?

A YAML-based build configuration allows you to define and configure build jobs by creating YAML files that you can push to the Git repository where the application code that the build job will be building resides.

This allows you to version your build configuration and keep the older versions, should you ever need to refer back to them. This is different from user interface-based build job configuration, where, once changes are saved, there is no way to refer back to an older version.
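
As a quick local illustration of that benefit (the repository path and commit messages here are hypothetical), plain Git is enough to recover any earlier revision of a versioned build configuration:

```shell
set -e
REPO=/tmp/buildcfg_demo        # throwaway path for the demo
rm -rf "$REPO"
git init -q "$REPO"
cd "$REPO"
git config user.email "ci@example.com"
git config user.name "ci"
mkdir -p .ci-build
printf 'job:\n  name: MyFirstYAMLJob\n' > .ci-build/my_first_yaml_build.yml
git add -A
git commit -qm "initial build configuration"
printf '  vm-template: Docker\n' >> .ci-build/my_first_yaml_build.yml
git add -A
git commit -qm "use Docker VM template"
# Recover the configuration exactly as it was one commit ago:
git show HEAD~1:.ci-build/my_first_yaml_build.yml
```

The final command prints the previous revision of the job definition, which is precisely what a UI-only configuration cannot give you.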

Is YAML replacing the User Interface based build configuration in Developer Cloud?

No, we aren’t replacing the existing UI-based build configuration in Developer Cloud with YAML. In fact, YAML-based build configuration is an alternative to it. Both configuration methods will co-exist going forward.

Are YAML and User Interface-based build configurations interchangeable in Developer Cloud?

No, not at the moment. What this means for the user is that a build job configuration created as a YAML file will always exist as a YAML file and can only be edited as one. A build job created or defined through the user interface will not be available as a YAML file for editing.

Now let’s move on to the fun part, scripting our first YAML-based build job configuration to build and push a Docker container to Docker registry.


Set Up the Git Repository for a YAML-Based Build

To start, create a Git repository in your Developer Cloud project and then create a .ci-build folder in that repository. This is where the YAML build configuration file will reside. For this blog, I named the Git repository NodeJSDocker, but you can name it whatever you want.

In the Project Home page, under the Repositories tab, click the +Create button to create a new repository.


Enter the repository Name and a Description, leave the default values for everything else, and click the Create button.


In the NodeJSDocker Git repository root, use the +File button and create three new files: Main.js, package.json, and Dockerfile.  Take a look at my NodeJS Microservice for Docker blog for the code snippets that are required for these files.

Your Git repository should look like this.


Create a YAML file in the .ci-build folder in the Git repository. The .ci-build folder should always be in the root of the repository.

In the file name field, enter .ci-build/my_first_yaml_build.yml, where .ci-build is the folder and my_first_yaml_build.yml is the YAML file that defines the build job configuration. Then add the code snippet below and click the Commit button.

Notice that the structure of the YAML file is very similar to the tabs of the Build Job configuration. The root mapping in the build job configuration YAML is “job”, which consists of “name”, “vm-template”, “git”, “steps”, and “settings”. The following list describes each of these:

  • “name”: Identifies the name of the build job; it must be unique within the project.
  • “vm-template”: Identifies the VM template that is used for building this job.
  • “git”: Defines the Oracle Developer Git repository url, branch, and repo-name.
  • “steps”: Defines the build steps. In YAML, we support all the same build steps as we support in a UI-based build job.
  • “settings”: Defines job settings, such as how long old builds and artifacts are kept.
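
Put together, those mappings form a skeleton like the following, a minimal sketch assembled from the full example later in the post; every value here is a placeholder:

```yaml
job:
  name: MyJobName                     # must be unique within the project
  vm-template: MyVMTemplate           # VM template used for the build
  git:
    - url: "https://<dev-cs-host>/<project>/scm/<repo>.git"
      branch: master
      repo-name: origin
  steps:
    - docker-login:
        username: "<registry-user>"       # required
        password: "#{<NAMED_PASSWORD>}"   # required
  settings:
    - discard-old:
        builds-to-keep: 10
```

Any of the supported build steps can appear under “steps”, in the order they should run.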


In the code snippet below, we define the build configuration to build and push the Docker container to DockerHub registry. To do this, we need to include the Docker Login, Docker Build, and Docker Push build steps in the steps mapping.

Note:

For the Docker Login step, you’ll need to include your password. However, storing your password in plain text in a readable file, such as in a YAML file, is definitely not a good idea. The solution is to use the named password functionality in Oracle Developer Cloud.

To define a named password for the Docker registry, click the Project Administration tab in the left navigation bar and then click the Build tile.


In the Named Password section, click the +Create button.


Enter the Name and the Password for the Named Password. You’ll refer to it in the build job. Click the Create button and it will be stored.

You can refer to this Named Password in the YAML build job configuration by using #{DOCKER_HUB}.
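In the build job YAML, the Named Password reference appears in the docker-login step like this fragment (the username shown is a placeholder):

```yaml
- docker-login:
    username: "myDockerHubUser"   # placeholder account name
    password: "#{DOCKER_HUB}"     # resolved from the Named Password at build time
```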


docker-build: Under source, put DOCKERFILE and, if the Dockerfile does not reside in the root of the Git repository, include the mapping that defines the path to it. Enter the image-name (required) and version-tag information.

docker-push: You do not need the registry-host entry if you plan to use DockerHub or Quay.io. Otherwise, provide the registry host. Enter the image-name (required) and version-tag information.

Similarly, for docker-login, you do not need the registry-host entry if you plan to use DockerHub or Quay.io.

job:
  name: MyFirstYAMLJob
  vm-template: Docker
  git:
    - url: "https://alex.admin@devinstance4wd8us2-wd4devcs8us2.uscom-central-1.oraclecloud.com/devinstance4wd8us2-wd4devcs8us2/s/devinstance4wd8us2-wd4devcs8us2_featuredemo_8401/scm/NodeJSDocker.git"
      branch: master
      repo-name: origin
  steps:
    - docker-login:
        username: "abhinavshroff" # required
        password: "#{DOCKER_HUB}" # required
    - docker-build:
        source: "DOCKERFILE"
        image:
          image-name: "abhinavshroff/nodejsmicroservice" # required
          version-tag: "latest"
    - docker-push:
        image:
          image-name: "abhinavshroff/nodejsmicroservice" # required
          version-tag: "latest"
  settings:
    - discard-old:
        days-to-keep-build: 5
        builds-to-keep: 10
        days-to-keep-artifacts: 5
        artifacts-to-keep: 10

Right after you commit the YAML file in the .ci-build folder of the repository, a job named MyFirstYAMLJob will be created in the Builds tab. Notice that the name of the job that is created matches the name of the job you defined in the my_first_yaml_build.yml file.

Click the MyFirstYAMLJob link and then, on the Builds page, click the Configure button. The Git tab will open, showing the my_first_yaml_build.yml file in the .ci-build folder of the NodeJSDocker.git repository. Click the Edit File button and edit the YAML file.


After you finish editing and commit the changes, return to the Builds tab and click the Build Job link. Then click the Build Now button.


When the build job executes, it builds the Docker image and then pushes it to DockerHub.

You can also create and configure pipelines using YAML. To learn more about creating and configuring build jobs and pipelines using YAML, see the documentation link.

To learn more about other new features in Oracle Developer Cloud, take a look at the What's New in Oracle Developer Cloud Service document and explore the links it provides to our product documentation. If you have any questions, you can reach us on the Developer Cloud Slack channel or in the online forum.

Happy Coding!

You Won’t Believe What Today’s Biggest Security Threat Is

Chris Warticki - Thu, 2019-05-16 15:09
CXOTALK with Praxair’s Earl Newsome

We often think of cybersecurity as solely the function of the IT department. But the truth is that security is the responsibility of each and every one of us as we discover how to navigate an increasingly digital, mobile, and social infoscape. IT security has broken out of the realm of the technical experts and is becoming part of everyone's job and day-to-day life.

In this CXOTalk video, Earl Newsome, Praxair Vice President and Chief Information Officer, sits down with industry analyst and CXOTALK host, Michael Krigsman, to discuss the importance of IT security across an entire organization.

People are the Biggest Threat to Security

The fact remains that traditional technical solutions—firewalls, permissions, encryption—help prevent data loss and theft. However, modern attacks occur on multiple levels and are no longer purely technical. Cybercriminals now use your own employees against your security defenses, without the employees even knowing it.

The unfortunate truth is that people represent the biggest threat to corporate security. “Most of the issues that happen in security happen on two legs, not on two wires. [First] we need to do what's necessary on the two wires side—technology—and make sure that we have the right monitoring, detection, and patching capabilities put in place,” says Newsome.

Security is Everyone’s Responsibility

However, security doesn’t stop there. Training, awareness, and preparation must be an integral part of your security program in order to prevent security threats from within. “Security is not just the job of IT or our vendors, it’s actually the job of the board, it’s the job of our employees–security is everyone’s job,” says Newsome.

Your organization could be armed with the most impenetrable technical security defense—but at the end of the day, your organization is only as secure as your employees.

Watch the CXOTALK video to discover how to ensure security is implemented at every level of your organization.


Earl Newsome

Vice President and CIO, Praxair



ACE-Organized Meet-Ups: May 17-June 13, 2019

OTN TechBlog - Thu, 2019-05-16 05:00
The meet-ups below were organized by the folks in the photos, but those people won't necessarily present the content. In many cases the events consist of multiple sessions. For additional detail on each event, please click the links provided.
 

Oracle ACE Christian Pfundtner
CEO, DB Masters GmbH
Austria


Host Organization: DB Masters
Friday, May 17, 2019
MA01 - Veranstaltungszentrum 
1220 Vienna, Stadlauerstraße 56 
 

Oracle ACE Laurent Leturgez
President/CTO, Premiseo
Lille, France


Host Organization: Paris Province Oracle Meetup
Monday, May 20, 2019
6:30pm - 8:30pm
Easyteam
39 Rue du Faubourg Roubaix
Lille, France
 

Oracle ACE Associate Mathias Magnusson
CEO, Evil Ape
Nacka, Sweden

 
Host Organization: Stockholm Oracle
Thursday, May 23, 2019
6:00pm - 8:00pm
(See link for location details)
 

Oracle ACE Ahmed Aboulnaga
Principal, Attain
Washington D.C.

 
Host Organization: Oracle Fusion Middleware & Oracle PaaS of NOVA
Tuesday, May 28, 2019
4:00pm - 6:00pm
Reston Regional Library
11925 Bowman Towne Dr.
Reston, VA
 

Oracle ACE Richard Martens
Co-Owner, SMART4Solutions B.V.
Tilburg, Netherlands

 
Host Organization: ORCLAPEX-NL
Wednesday, May 29, 2019
5:30pm - 9:30pm
Oracle Netherlands
Hertogswetering 163-167,
Utrecht, Netherlands
 

Oracle ACE Associate José Rodrigues
Business Manager for BPM & WebCenter, Link Consulting
Lisbon, Portugal

 
Host Organization: Oracle Developer Meetup Lisbon
Thursday, May 30, 2019
6:30pm - 8:30pm
Auditorio Link Consulting
Avenida Duque Ávila, 23
Lisboa
 

Oracle ACE Director Rita Nuñez
Consultora IT Sr, Tecnix Solutions
Argentina

 
Host Organization: Oracle Users Group of Argentina (AROUG)
Thursday, June 13, 2019
Aula Magna UTN.BA - Medrano 951
 

Indexing Null Values - Part 1

Randolf Geist - Wed, 2019-05-15 17:04
Indexing null values in Oracle is something that has been written about a lot in the past already. Nowadays it should be common knowledge that Oracle B*Tree indexes don't index entries that are entirely null, but it's possible to include null values in B*Tree indexes when combining them with something guaranteed to be non-null, be it another column or simply a constant expression.

Jonathan Lewis not too long ago published a note showing an oddity when dealing with IS NULL predicates that in the end turned out not to be a real threat, but rather an oddity in how Oracle displays the access and filter predicates when accessing an index and using IS NULL together with other predicates that follow.

However, I've recently come across a rather similar case where this display oddity turns into a real threat. To get things started, let's have a look at the following (this is from 18.3.0, but other recent versions should show similar results):

SQL> create table null_index as select * from dba_tables;

Table created.

SQL> insert /*+ append */ into null_index select a.* from null_index a, (select /*+ no_merge cardinality(100) */ rownum as dup from dual connect by level <= 100);

214700 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats(null, 'NULL_INDEX', method_opt => 'for all columns size auto for columns size 254 owner')

PL/SQL procedure successfully completed.

SQL> create index null_index_idx on null_index (pct_free, ' ');

Index created.

SQL> set serveroutput off pagesize 5000 arraysize 500

Session altered.

SQL> set autotrace traceonly
SQL>
SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3608178030

------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 19 | 5852 | 1028 (1)| 00:00:01 |
|* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 19 | 5852 | 1028 (1)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | NULL_INDEX_IDX | 13433 | | 32 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')
2 - access("PCT_FREE" IS NULL)


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
2178 consistent gets
35 physical reads
0 redo size
7199 bytes sent via SQL*Net to client
372 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed

So this is the known approach of indexing null values by simply adding a constant expression and we can see from the execution plan that indeed the index was used to identify the rows having NULLs.

But we can also see from the execution plan, the number of consistent gets, and the Rowsource Statistics that this access can surely be improved further:

Plan hash value: 3608178030

--------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
--------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 1028 (100)| 101 |00:00:00.01 | 2178 | 35 |
|* 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 1 | 19 | 1028 (1)| 101 |00:00:00.01 | 2178 | 35 |
|* 2 | INDEX RANGE SCAN | NULL_INDEX_IDX | 1 | 13433 | 32 (0)| 13433 |00:00:00.01 | 30 | 35 |
--------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

1 - filter(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))
2 - access("PCT_FREE" IS NULL)

Because the additional predicate on OWNER can only be applied on table level, we first identify more than 13,000 rows on index level, visit all those table rows via random access and apply the filter to end up with the final 101 rows.

So obviously we should add OWNER to the index to avoid visiting that many table rows:

SQL> create index null_index_idx2 on null_index (pct_free, owner);

Index created.

SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 3808602675

-------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 19 | 5852 | 40 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 19 | 5852 | 40 (0)| 00:00:01 |
|* 2 | INDEX RANGE SCAN | NULL_INDEX_IDX2 | 19 | | 38 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("PCT_FREE" IS NULL)
filter("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')



Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
137 consistent gets
61 physical reads
0 redo size
33646 bytes sent via SQL*Net to client
372 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed

Plan hash value: 3808602675

---------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
---------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 40 (100)| 101 |00:00:00.01 | 137 | 61 |
| 1 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 1 | 19 | 40 (0)| 101 |00:00:00.01 | 137 | 61 |
|* 2 | INDEX RANGE SCAN | NULL_INDEX_IDX2 | 1 | 19 | 38 (0)| 101 |00:00:00.01 | 36 | 61 |
---------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

2 - access("PCT_FREE" IS NULL)
filter(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST'))

So at first sight this looks indeed like an improvement, and compared to the previous execution plan it is: see, for example, how the number of consistent gets has been reduced. However, there is something odd going on: the index cost part is even greater than in the previous example, and looking more closely at the predicate information section it becomes obvious that the additional predicate on OWNER isn't applied as an access predicate to the index, but only as a filter.

This means that rather than directly identifying the relevant parts of the index by navigating the index structure efficiently using both predicates, only the PCT_FREE IS NULL expression gets used to identify the more than 13,000 corresponding index entries, with the filter on OWNER applied afterwards. While this is better than applying the filter on table level, it can still become a very costly operation, and the question here is: why doesn't Oracle use both expressions to access the index?

The answer to me looks like an implementation restriction - I don't see any technical reason why Oracle shouldn't be capable of doing so. Currently it looks like, in this particular case, an IN predicate (or the equivalent OR predicates) following an IS NULL gets applied only as a filter on index level, similar to predicates following range or unequal comparisons, or to predicates on columns / expressions skipped in a composite index. But for those cases there is a reason why Oracle does so - it can no longer use the sorted index entries for efficient access - whereas I don't see why this should apply to this IS NULL case. And Jonathan's note above shows that, in principle, for other kinds of predicates it works as expected (except for the oddity discussed).

This example highlights another oddity: Since it contains an IN list, ideally we would like to see an INLIST ITERATOR used as part of the execution plan, but there is only an INDEX RANGE SCAN operation using this FILTER expression.

By changing the order of the index expressions and having the expression used for the IS NULL predicate as trailing one, we can see the following:

SQL> create index null_index_idx3 on null_index (owner, pct_free);

Index created.

SQL> select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

101 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 2178707950

--------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 19 | 5852 | 6 (0)| 00:00:01 |
| 1 | INLIST ITERATOR | | | | | |
| 2 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 19 | 5852 | 6 (0)| 00:00:01 |
|* 3 | INDEX RANGE SCAN | NULL_INDEX_IDX3 | 19 | | 4 (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access(("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST') AND "PCT_FREE" IS NULL)


Statistics
----------------------------------------------------------
1 recursive calls
0 db block gets
108 consistent gets
31 physical reads
0 redo size
33646 bytes sent via SQL*Net to client
372 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
101 rows processed

Plan hash value: 2178707950

----------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | Cost (%CPU)| A-Rows | A-Time | Buffers | Reads |
----------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 6 (100)| 101 |00:00:00.01 | 108 | 31 |
| 1 | INLIST ITERATOR | | 1 | | | 101 |00:00:00.01 | 108 | 31 |
| 2 | TABLE ACCESS BY INDEX ROWID BATCHED| NULL_INDEX | 2 | 19 | 6 (0)| 101 |00:00:00.01 | 108 | 31 |
|* 3 | INDEX RANGE SCAN | NULL_INDEX_IDX3 | 2 | 19 | 4 (0)| 101 |00:00:00.01 | 7 | 31 |
----------------------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

3 - access((("OWNER"='AUDSYS' OR "OWNER"='CBO_TEST')) AND "PCT_FREE" IS NULL)

So this is the expected execution plan, including an INLIST ITERATOR and showing that all predicate expressions get used to access the index efficiently, reducing the number of consistent gets further. Of course, a potential downside here is that this index might not be appropriate if queries are looking for PCT_FREE IS NULL only.

Summary

It looks like that IN / OR predicates following an IS NULL comparison on index level are only applied as filters and therefore also prevent other efficient operations like inlist iterators. The problem in principle can be worked around by putting the IS NULL expression at the end of a composite index, but that could come at the price of requiring an additional index on the IS NULL expression when there might be the need for searching just for that expression efficiently.

In part 2, out of curiosity, I'll have a look at what happens when applying the same to Bitmap indexes, which include NULL values anyway...

Script used:

set echo on

drop table null_index purge;

create table null_index as select * from dba_tables;

insert /*+ append */ into null_index select a.* from null_index a, (select /*+ no_merge cardinality(100) */ rownum as dup from dual connect by level <= 100);

commit;

exec dbms_stats.gather_table_stats(null, 'NULL_INDEX', method_opt => 'for all columns size auto for columns size 254 owner')

create index null_index_idx on null_index (pct_free, ' ');

set serveroutput off pagesize 5000 arraysize 500

set autotrace traceonly

select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

create index null_index_idx2 on null_index (pct_free, owner);

select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

create index null_index_idx3 on null_index (owner, pct_free);

select * from null_index where pct_free is null and owner in ('AUDSYS', 'CBO_TEST');

Podcast: Do Bloody Anything: The Changing Role of the DBA

OTN TechBlog - Wed, 2019-05-15 05:00

In August of 2018 we did a program entitled Developer Evolution: What’s Rocking Roles in IT. That program focused primarily on the forces that are reshaping the role of the software developer. In this program we shift the focus to the DBA -- the Database Administrator -- and the evolve-or-perish choices that face those in that role.

Bringing their insight to the discussion is an international panel of experts who represent years of DBA experience, and some of the forces that are transforming that role.

The Panelists

In alphabetical order

Maria Colgan
Master Product Manager, Oracle Database
San Francisco, California


 “Security, especially as people move more towards cloud-based models, is something DBAs should get a deeper knowledge in.”

 

Oracle ACE Director Julian Dontcheff
Managing Director/Master Technology Architect, Accenture
Helsinki, Finland

 

"Now that Autonomous Database is here, I see several database administrators being scared that somehow all their routine tasks will be replaced and they will have very little to do. As if doing the routine stuff is the biggest joy in their lives."

 

Oracle ACE Director Tim Hall
DBA, Developer, Author, and Trainer
Birmingham, United Kingdom


 “I never want to do something twice if I can help it. I want to find a way of automating it. If the database will do that for me, that’s awesome.”

 

Oracle ACE Director Lucas Jellema
CTO/Consulting IT Architect, AMIS
Rotterdam, Netherlands


 “By taking heed of what architects are coming up with, and how applications and application landscapes are organized and how the data plays a part in that, I think DBAs can prepare themselves and play a part in putting it all together in a meaningful way.”

 

Oracle ACE Director Brendan Tierney
Principal Consultant, Oralytics
Dublin, Ireland


"Look beyond what you're doing in your cubicles with your blinkers on. See what's going on across all IT departments. What are the business needs? How is data being used? Where can you contribute to that to deliver better business value?"

 

Gerald Venzl
Master Product Manager, Oracle Cloud, Database, and Server Technologies
San Francisco, California

 

"When you talk to anybody outside the administrative roles -- DBA or Unix Admin -- they will tell you that those people are essentially the folks that always say no. That's not very productive."

 


ORA-15067: command or option incompatible with diskgroup redundancy

VitalSoftTech - Wed, 2019-05-15 00:07
When dropping an online disk from a diskgroup, why is the ORA-15067 error returned?
Categories: DBA Blogs

Intel Processor MDS Vulnerabilities: CVE-2019-11091, CVE-2018-12126, CVE-2018-12130, and ...

Oracle Security Team - Tue, 2019-05-14 11:59

Today, Intel disclosed a new set of speculative execution side channel vulnerabilities, collectively referred as “Microarchitectural Data Sampling” (MDS).  These vulnerabilities affect a number of Intel processors and have received four distinct CVE identifiers to reflect how they impact the different microarchitectural structures of the affected Intel processors:

  • CVE-2019-11091: Microarchitectural Data Sampling Uncacheable Memory (MDSUM)
  • CVE-2018-12126: Microarchitectural Store Buffer Data Sampling (MSBDS) 
  • CVE-2018-12127: Microarchitectural Load Port Data Sampling (MLPDS)
  • CVE-2018-12130: Microarchitectural Fill Buffer Data Sampling (MFBDS)

While vulnerability CVE-2019-11091 has received a CVSS Base Score of 3.8, the other vulnerabilities have all been rated with a CVSS Base Score of 6.5.   As a result of the flaw in the architecture of these processors, an attacker who can execute malicious code locally on an affected system can compromise the confidentiality of data previously handled on the same thread or compromise the confidentiality of data from other hyperthreads on the same processor as the thread where the malicious code executes.  As a result, MDS vulnerabilities are not directly exploitable against servers that do not allow the execution of untrusted code.

These vulnerabilities are collectively referred as Microarchitectural Data Sampling issues (MDS issues) because they refer to issues related to microarchitectural structures of the Intel processors other than the level 1 data cache.  The affected microarchitectural structures in the affected Intel processors are the Data Sampling Uncacheable Memory (uncacheable memory on some microprocessors utilizing speculative execution), the store buffers (temporary buffers to hold store addresses and data), the fill buffers (temporary buffers between CPU caches), and the load ports (temporary buffers used when loading data into registers).  MDS issues are therefore distinct from the previously-disclosed Rogue Data Cache Load (RDCL) and L1 Terminal Fault (L1TF) issues.

Effectively mitigating these MDS vulnerabilities will require updates to Operating Systems and Virtualization software in addition to updated Intel CPU microcode. 

While Oracle has not yet received reports of successful exploitation of these issues “in the wild,” Oracle has worked with Intel and other industry partners to develop technical mitigations against these issues.

In response to these MDS issues:

Oracle Hardware:

  • Oracle recommends that administrators of x86-based Systems carefully assess the impact of the MDS flaws for their systems and implement the appropriate security mitigations.  Oracle will provide specific guidance for Oracle Engineered Systems.
  • Oracle has determined that Oracle SPARC servers are not affected by these MDS vulnerabilities.

Oracle Operating Systems (Linux and Solaris) and Virtualization:

  • Oracle has released security patches for Oracle Linux 7, Oracle Linux 6 and Oracle VM Server for X86 products.  In addition to OS patches, customers should run the current version of the Intel microcode to mitigate these issues. In certain instances, Oracle Linux customers can take advantage of Oracle Ksplice to apply these updates without needing to reboot their systems.
  • Oracle has determined that Oracle Solaris on x86 is affected by these vulnerabilities.  Customers should refer to Doc ID 2540621.1  for additional information.
  • Oracle has determined that Oracle Solaris on SPARC is not affected by these MDS vulnerabilities.

Oracle Cloud:

  • The Oracle Cloud Security and DevOps teams continue to work in collaboration with our industry partners on implementing mitigations for these MDS vulnerabilities that are designed to protect customer instances and data across all Oracle Cloud offerings: Oracle Cloud (IaaS, PaaS, SaaS), Oracle NetSuite, Oracle GBU Cloud Services, Oracle Data Cloud, and Oracle Managed Cloud Services. 
  • Oracle will inform Cloud customers using the normal maintenance notification mechanisms about required maintenance activities as additional mitigating controls continue to be implemented in response to the MDS vulnerabilities.
  • Oracle has determined that the MDS vulnerabilities will not impact a number of Oracle's cloud services.  They include Autonomous Data Warehouse service, which provides a fully managed database optimized for running data warehouse workloads, and Oracle Autonomous Transaction Processing service, which provides a fully managed database service optimized for running online transaction processing and mixed database workloads.  No further action is required by customers of these services as both were found to require no additional mitigating controls based on service design to prevent the exploitation of the MDS vulnerabilities.  
  • Bare metal instances in Oracle Cloud Infrastructure (OCI) Compute offer full control of a physical server and require no additional Oracle code to run.  By design, the bare metal instances are isolated from other customer instances on the OCI network whether they be virtual machines or bare metal.  However, for customers running their own virtualization stack on bare metal instances, the MDS vulnerability could allow a virtual machine to access privileged information from the underlying hypervisor or other VMs on the same bare metal instance.  These customers should review the Intel recommendations about these MDS vulnerabilities and make the recommended changes to their configurations.

As previously anticipated, we continue to expect that new techniques leveraging speculative execution flaws in processors will be disclosed.  These issues are likely to continue to primarily impact operating systems and virtualization platforms, and addressing them will likely continue to require software and microcode updates.  Oracle therefore recommends that customers remain on current security release levels, including firmware and applicable microcode updates (delivered as firmware or OS patches), as well as software upgrades.
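On Linux systems, administrators can get a quick view of whether a given host reports an MDS mitigation by reading the kernel's sysfs vulnerabilities interface. This is a generic Linux kernel facility, not an Oracle-specific tool, and the status file is only present on kernels that include the MDS reporting code:

```shell
# Report the kernel's view of MDS mitigation status, if available.
# The sysfs file exists only on kernels that include MDS reporting.
if [ -r /sys/devices/system/cpu/vulnerabilities/mds ]; then
    cat /sys/devices/system/cpu/vulnerabilities/mds
else
    echo "MDS status not reported (kernel predates MDS reporting)"
fi
```

A status of "Mitigation: Clear CPU buffers; SMT vulnerable" or similar indicates the kernel-side fix is active; interpreting the exact string is kernel-version dependent.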

 

For more information:

Oracle Linux customers can refer to the bulletins located at https://linux.oracle.com/cve/CVE-2019-11091.html, https://linux.oracle.com/cve/CVE-2018-12126.html, https://linux.oracle.com/cve/CVE-2018-12130.html, https://linux.oracle.com/cve/CVE-2018-12127.html

For information about the availability of Intel microcode for Oracle hardware, see Intel MDS Vulnerabilities (CVE-2019-11091, CVE-2018-12126, CVE-2018-12130, and CVE-2018-12127): Intel Processor Microcode Availability (Doc ID 2540606.1) and Intel MDS (CVE-2019-11091, CVE-2018-12126, CVE-2018-12130, and CVE-2018-12127) Vulnerabilities in Oracle x86 Servers (Doc ID 2540621.1)

Oracle Solaris customers should refer to Intel MDS Vulnerabilities (CVE-2019-11091, CVE-2018-12126, CVE-2018-12130, and CVE-2018-12127): Oracle Solaris Impact (Doc ID 2540522.1)

Oracle Cloud Infrastructure (OCI) customers should refer to https://docs.cloud.oracle.com/iaas/Content/Security/Reference/MDS_response.htm 

 
