Feed aggregator

Error (CLSD|CLSU-00100|CLSU-00103: error location: sclsdgcwd2|CLSD00183) Running ggsci

Michael Dinh - Fri, 2019-02-15 18:25

Rant: Any application that requires strace to find the root cause of a simple problem is poorly written.

Oracle blog – Amardeep Sidhu January 12, 2019 Error while running ggsci

The blog above was a great help.

$ ./ggsci 

Oracle GoldenGate Command Interpreter for Oracle
Version OGGCORE_12.
Linux, x64, 64bit (optimized), Oracle 12c on Oct 30 2017 20:49:22
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2017, Oracle and/or its affiliates. All rights reserved.

2019-02-15 18:04:14.827 
CLSD: An error occurred while attempting to generate a full name. Logging may not be active for this process
Additional diagnostics: CLSU-00100: operating system function: sclsdgcwd failed with error data: -1
CLSU-00103: error location: sclsdgcwd2

Results from strace.

strace ./ggsci 
mkdir("/u01/app/oracle/product/12.1.0/client_2/log", 01777) = -1 EACCES (Permission denied)

I did take a different path for resolution.

gid is different for ggsuser and oracle

uid=1521(ggsuser) gid=1500(dba)      groups=1500(dba),1501(oinstall)
uid=1500(oracle)  gid=1501(oinstall) groups=1501(oinstall),1500(dba)

As root, chmod 775 -R /u01/app resolved the issue.

# cd /u01/
# chmod 775 -R app/
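The fix makes sense once you look at what mkdir() requires: the write and search (execute) bits on the parent directory for the caller's permission class. A minimal Python sketch (my own illustration, not part of the original post) of why a 755 tree fails for ggsuser while 775 works:

```python
import stat

def can_group_create_entry(mode):
    # Creating an entry (e.g. the "log" directory ggsci wants) inside a
    # directory requires both the write and the search (execute) bits;
    # ggsuser reaches the oracle-owned tree only through the group class.
    return bool(mode & stat.S_IWGRP) and bool(mode & stat.S_IXGRP)

print(can_group_create_entry(0o755))  # False -> the EACCES seen in strace
print(can_group_create_entry(0o775))  # True  -> after chmod 775
```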

However, this does not explain why it was working before adding GoldenGate to CRS.

# agctl add goldengate GoldenGate_instance \
--instance_type target \
--oracle_home /u01/app/oracle/product/12.1.0/client_2 \
--nodes node1,node2 \
--network 1 --ip \
--user ggsuser \
--group dba \
--filesystems ora.acfs_data.acfs_vol.acfs \
--gg_home /u03/gg/12.2.0

Lateral view decorrelation(VW_DCL) causes wrong results with rownum

XTended Oracle SQL - Fri, 2019-02-15 17:45

Everyone knows that rownum in inline views blocks many query transformations, for example pushing/pulling predicates, scalar subquery unnesting, etc., and many people use it as a workaround to avoid unwanted transformations (or even CBO bugs).

Obviously, the main reason for that is the different calculation of rownum:

If we pull the predicate “column_value = 3” from the following query to a higher level

select * 
from (select * from table(odcinumberlist(1,1,1,2,2,2,3,3,3)) order by 1)
where rownum <= 2
  and column_value = 3;


we will get different results:

select * 
from (select *
      from (select * from table(odcinumberlist(1,1,1,2,2,2,3,3,3)) order by 1)
      where rownum <= 2)
where column_value = 3;

no rows selected

Doc ID 62340.1
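The asymmetry comes from how ROWNUM is assigned: a row receives the next ROWNUM value only after it passes all the other predicates of its query block. A toy Python model (my own illustration, not Oracle internals) of the two predicate placements:

```python
# Model of ROWNUM assignment: the counter advances only when a row
# satisfies the whole WHERE clause of its query block.
def filter_with_rownum(rows, pred, limit):
    out, rownum = [], 1
    for r in rows:
        if rownum <= limit and pred(r):
            out.append(r)
            rownum += 1
    return out

data = sorted([1, 1, 1, 2, 2, 2, 3, 3, 3])

# predicate kept inside the ROWNUM query block: the first two rows
# satisfying column_value = 3 are returned
combined = filter_with_rownum(data, lambda r: r == 3, 2)

# predicate pulled above the view: rownum <= 2 keeps (1, 1),
# which the outer filter then rejects -> no rows selected
pulled = [r for r in filter_with_rownum(data, lambda r: True, 2) if r == 3]

print(combined)  # [3, 3]
print(pulled)    # []
```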


But we recently encountered a bug with it: lateral view with ROWNUM returns wrong results in case of lateral view decorrelation.
Compare results of this query with and without no_decorrelation hint:

with t1(a) as (select * from table(odcinumberlist(1,3)))
    ,t2(b) as (select * from table(odcinumberlist(1,1,3,3)))
    ,t(id) as (select * from table(odcinumberlist(1,2,3)))
select t.id, rn
from t,
     lateral(select/*+ no_decorrelate */ rownum rn 
             from t1,t2 
             where t1.a=t2.b and t1.a = t.id)
order by 1,2;

        ID         RN
---------- ----------
         1          1
         1          2
         3          1
         3          2

with t1(a) as (select * from table(odcinumberlist(1,3)))
    ,t2(b) as (select * from table(odcinumberlist(1,1,3,3)))
    ,t(id) as (select * from table(odcinumberlist(1,2,3)))
select t.id, rn
from t,
     lateral(select rownum rn 
             from t1,t2 
             where t1.a=t2.b and t1.a = t.id)
order by 1,2;

        ID         RN
---------- ----------
         1          1
         1          2
         3          3
         3          4

Of course, we can draw conclusions even from these results: we can see that in the case of decorrelation (the query without the hint) rownum was calculated before the join. But to be sure, we can check the optimizer's 10053 trace:

Final query after transformations:

******* UNPARSED QUERY IS *******
SELECT VALUE(KOKBF$2) "ID", "VW_DCL_76980902"."RN" "RN"
               TABLE("ODCINUMBERLIST"(1, 1, 3, 3)) "KOKBF$1"
        ) "VW_DCL_76980902"
 WHERE "VW_DCL_76980902"."ITEM_3"(+) = VALUE(KOKBF$2)
 ORDER BY VALUE(KOKBF$2), "VW_DCL_76980902"."RN"



I’ll modify it a bit just to make it more readable:
we can see that

select t.id, dcl.rn
from t,
     lateral(select rownum rn 
             from t1,t2 
             where t1.a=t2.b and t1.a = t.id)(+) dcl
order by 1,2;

was transformed to

select t.id, dcl.rn
from t,
     (select t1.a, rownum rn 
      from t1,t2 
      where t1.a=t2.b) dcl
where dcl.a(+) = t.id
order by 1,2;

And it confirms that rownum was calculated on a different dataset (the t1-t2 join), without the join filter on table t.
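The two result sets can be reproduced with a small Python model (my own sketch, not the optimizer's code): ROWNUM computed per invocation of the correlated lateral versus once over the whole decorrelated t1-t2 join:

```python
t1 = [1, 3]        # t1(a)
t2 = [1, 1, 3, 3]  # t2(b)
t  = [1, 2, 3]     # t(id)

def correlated():
    # lateral(select rownum rn from t1,t2 where t1.a=t2.b and t1.a=t.id):
    # the lateral view is re-evaluated for every row of t, so rn restarts
    out = []
    for id_ in t:
        rn = 0
        for a in t1:
            for b in t2:
                if a == b and a == id_:
                    rn += 1
                    out.append((id_, rn))
    return out

def decorrelated():
    # VW_DCL: rownum is assigned once over the whole t1-t2 join,
    # and only then joined back to t on a = id
    numbered, rn = [], 0
    for a in t1:
        for b in t2:
            if a == b:
                rn += 1
                numbered.append((a, rn))
    return [(id_, n) for id_ in t for (a, n) in numbered if a == id_]

print(correlated())    # [(1, 1), (1, 2), (3, 1), (3, 2)] -- the hinted query
print(decorrelated())  # [(1, 1), (1, 2), (3, 3), (3, 4)] -- the wrong results
```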
I created an SR with Severity 1 (SR #3-19117219271) more than a month ago, but unfortunately Oracle development does not want to fix this bug; moreover, they say it is not a bug at all. So I think this is a dangerous precedent, and soon we will probably no longer be able to rely on the calculation of rownum and on old fixes…

Categories: Development

HEUG Alliance 2019

Jim Marion - Fri, 2019-02-15 13:50
With the HEUG Alliance 2019 conference starting in a few weeks, it is time to finalize our session schedules. Reviewing the agenda, I see many great education sessions from partners such as Presence of IT, SpearMC, Appsian, and Mutara Inc, as well as many, many customer sessions covering important topics including security, user experience, integration, tools, add-on products and so on. This is clearly an Alliance we don't want to miss! On Monday I will be presenting new PeopleTools Tips and Techniques, and then on Wednesday I am leading the workshop PeopleSoft Fluid: Zero to Hero in an Afternoon.
I look forward to seeing you at Alliance 2019!

Now Available: Oracle Management Cloud for E-Business Suite

Steven Chan - Fri, 2019-02-15 12:02

On behalf of our colleagues in the Oracle Management Cloud development team, we are pleased to announce the release of Oracle Management Cloud (OMC) for Oracle E-Business Suite for end-to-end, cloud-based monitoring.

OMC offers a simple, single-click discovery interface that you can use to obtain a fast, strategic look at the entire application environment.

All Oracle E-Business Suite administrators will benefit from OMC's detailed health and performance views of key EBS components such as:

  • Concurrent Processing
  • Forms Services
  • Workflow
  • WebLogic Server
  • Database

Oracle Management Cloud utilizes machine learning to track and provide alerts on key performance indicators such as concurrent request volume and runtime characteristics, and pending Forms sessions.

In addition to these various EBS components, OMC can also monitor a wide range of infrastructure hardware such as load balancers, switches, firewalls, and storage arrays.

Categories: APPS Blogs

New search function for Oracle database features

Yann Neuhaus - Fri, 2019-02-15 10:25

Oracle released a new way to search for database features, and it is …

..really great.
Try out the cool new search application for Oracle database features.

What is special about it? The new site gives you a very smart overview of the database features and the versions in which they are available (at least from 11g to 19c).

As an example, when you search for “compression”, it shows you each matching feature and the versions it is available in.

With a click on the feature you are interested in, you can see a short description and jump directly into the documentation.

Oh, and if somebody from Oracle is reading this:
Please also add the license information for all the features. Thank you :-)

Playing around with the new site, I already found some features I did not know existed.
Or did you know there is an ASM Cloning feature starting with Oracle 18c and Flex Disk Groups?

Have a nice weekend.

Thanks to
Thomas Teske
from Oracle who showed me this feature!

Cet article New search function for Oracle database features est apparu en premier sur Blog dbi services.

Efficient way to fetch data using rownumber from a billion record table??

Tom Kyte - Fri, 2019-02-15 10:06
I'm trying to fetch required data from a billion record table using the row number approch (i.e, select "ingest_ts_utc" from (SELECT to_char(sys_extract_utc(systimestamp), 'YYYY-MM-DD HH24:MI:SS.FF') as "ingest_ts_utc" ,ROWNUM as rno from XYZ.ABC ) ...
Categories: DBA Blogs

using sql loader for a table where you only have insert privileges via a non default role

Tom Kyte - Fri, 2019-02-15 10:06
Team, I'm trying to use Sql Loader to load a text file into a normal Oracle table (as opposed to an external one). Unfortunately I only have insert rights to the table via a non default role. If I were just doing a simple insert statement in sq...
Categories: DBA Blogs

ADF Performance Improvement with Nginx Compression

Andrejus Baranovski - Fri, 2019-02-15 08:54
We are using the Nginx web server for the Oracle ADF WorkBetter demo hosted on a DigitalOcean cloud server. Nginx helps to serve web application content fast and offers improved performance. One of the important tuning options is content compression; Nginx does this job well and is simple to set up.

Content compression doesn't improve raw runtime performance: a browser runs the same code whether or not it was compressed. But it brings improved perceived performance (which is very important), because network time drops along with the reduced content size. Oracle ADF is a server-side framework, so each request brings content from the server; the faster this content arrives, the better the application performance.
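To get an intuition for the payload reduction (a generic Python check of mine, not ADF-specific):

```python
import gzip

# Repetitive markup (tables, rows, panels) compresses extremely well,
# which is why the transferred page size drops so sharply in the stats.
html = ("<tr class='row'><td>employee</td><td>compensation</td></tr>\n" * 500).encode()
compressed = gzip.compress(html)
print(len(html), len(compressed))  # the gzipped payload is a small fraction of the original
```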

1. Content Compression = OFF

Let's see the stats when no content compression is applied (using our Oracle ADF WorkBetter hosted demo).

Page load size is 2.69 MB transferred. Finish time 1.55 s:

Navigation to the employee section generates 165.76 KB and finish time 924 ms:

Navigation to employee compensation generates 46.19 KB and finish time 494 ms:

2. Nginx compression

Compression is simple to set up in Nginx. Gzip settings go in nginx.conf; make sure to list all content types which must be supported for compression. Restart the nginx process after the new settings are saved in nginx.conf:
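As an illustration, a minimal gzip section of this kind might look as follows (hypothetical values taken from common defaults, tune them for your deployment):

```nginx
# inside the http { } block of nginx.conf
gzip on;
gzip_min_length 1024;     # skip tiny responses
gzip_comp_level 5;        # CPU vs. compression-ratio trade-off
gzip_proxied any;
gzip_types text/plain text/css text/javascript application/javascript
           application/json application/xml image/svg+xml;
```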

3. Content Compression = ON

Page load size is 733.84 KB transferred. Finish time 1.48 s:

Navigation to the employee section generates 72.75 KB and finish time 917 ms:

Navigation to employee compensation generates 7.59 KB and finish time 498 ms:

[BLOG] Oracle GoldenGate 12c Overview & Components

Online Apps DBA - Fri, 2019-02-15 04:24

Are you a Beginner who has just started learning Oracle GoldenGate? If yes, then visit: https://k21academy.com/goldengate11 and learn: ✔What is Extract Process and how it can be configured ✔What is Replicat and Collector ✔All other different components of GoldenGate and much more…

The post [BLOG] Oracle GoldenGate 12c Overview & Components appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

A few scripting languages for Documentum

Yann Neuhaus - Fri, 2019-02-15 03:25

Beside the obsolete dmbasic, the autistic dmawk, the verbose java with the DfCs, the limited iapi (for API) and idql (for DQL) command-line tools, Documentum does not offer any scripting language for the administrator and the out-of-the-box experience is quite frustrating in this respect. It has been so even before the java trend so it is not a maneuver to force the use of the DfCs or to rely on it for administrative tasks. It looks more like an oversight or like this was considered as a low-priority need.
Of course, people didn’t stop at this situation and developed their own bindings with their preferred scripting language. I found db:Documentum for perl, yours truly’s DctmAPI.py for python (refer to the article here), dmgawk for gawk (see here), and of course all the JVM-based languages that leverage the DfCs such as groovy, beanshell, jython, jruby, etc… Such JVM-based scripting languages actually only need to import the DfCs library and off they go for the next killer script. In this article, I’ll show how to set up the binding for a few of those languages under the linux O/S.

Db::Documentum (perl)

This is a perl v5 module that permits access to the Documentum api from the perl interpreter. It was developed by M. Scott Roth, see here, originally only for the Windows O/S and EDMS v3.1.5. Thanks to other contributors, it is now compilable under several flavors of Unices, including Linux. It is downloadable from here.
You’ll need the GNU C compiler to generate the module. Here is a detailed, step by step description of the installation procedure.

# download the archive Db-Documentum-1.64.tar.gz from here http://www.perl.com/CPAN/modules/by-module/Db/
# decompress it in, say, db::Documentum
tar -zxvf Db-Documentum-1.64.tar.gz
# move to the newly created directory Db-Documentum-1.64;
cd Db-Documentum-1.64
# prepare the following needed paths;
# path to the Documentum installation, e.g. /home/dmadmin/documentum
# path to the Documentum libdmcl.so library, e.g. ${DM_HOME}/product/7.3/bin
# note: there is also the obsolescent libdmcl40.so but I've encountered problems with it, mostly "Segmentation fault (core dumped)" crashes, so use the JNI-based libdmcl.so instead; it starts more slowly as it uses java but it is more reliable and is still supported;
# path to the include file dmapp.h, e.g. ${DM_HOME}/share/sdk/include
# edit the linux section in Makefile.PL and provide the above paths;
# also, move up the $DM_LIB initialization before the dmcl.so test and comment the line beginning with $DM_CLIENT_LIBS =
# here is how that section looks like after editing it:

elsif ( $OS =~ /linux/i ) {
   # The path to your Documentum client installation.
   $DM_HOME = '/home/dmadmin/documentum';
   # This is kinda a gottcha, the Linux stuff is in unix/linux
   # You may have to tweak these.
   # Path to documentum client libraries.
   #$DM_LIB = "$DM_HOME/unix/linux";
   $DM_LIB = "$DM_HOME/product/7.3/bin";
   # dmcl.so file
   if (! -e "$DM_LIB/libdmcl.so") {
      warn "\n*** WARNING *** Could not find $DM_LIB/libdmcl.so.\nThe module will not make without it.\n";
   }
   # Path to directory where dmapp.h lives.
   #$DM_INCLUDE = "-I/documentum/share/sdk/include/";
   $DM_INCLUDE = "-I$DM_HOME/share/sdk/include/";
   #$DM_CLIENT_LIBS = "-L$DM_LIB -ldmapi -ldmupper -ldmlower -ldmcommon -ldmupper -lcompat";
   $DM_RPC_LIBS = "-L$DM_LIB -lnwrpc -lnwstcp";
   $OS_LIBS = "-lsocket -lnsl -lintl";
   $CPP_LIBS = "-lC";
   $LD_LIBS = "-ldl";
   $CPP_INC = "";
   $CCFLAGS = "";
}

# execute the Makefile.PL;
perl Makefile.PL
# if the error below occurs, you must install the perl-devel module using the native package deployment tool for your distribution,
# e.g. sudo yum install perl-devel for centos:
# Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at Makefile.PL line 1.
#BEGIN failed--compilation aborted at Makefile.PL line 1.
# a Makefile file has been generated; correct the 2 lines below as shown;
EXTRALIBS = -L/home/dmadmin/documentum/product/7.3/bin -ldmcl
LDLOADLIBS = -L/home/dmadmin/documentum/product/7.3/bin -ldmcl -lc
# use the newly produced Makefile;
make
# run some tests to check the new module;
make test
# the test completes successfully but, sometimes, it is followed by SIGSEGV in the JVM;
# as it occurs at program termination, it can be ignored;
# install the new perl module system-wide;
sudo make install

Now that we have the module, let’s use it in a simple test case: dump of all the dm_sysobject linked in cabinet /dmadmin (its id is 0c00c35080000105) in the out-of-the box dmtest repository.

cat my-test.pl

use Db::Documentum qw(:all);
use Db::Documentum::Tools qw(:all);

# print version;
printf("Perl version: %s\n", $]);
printf("Db::Documentum version: %s\n", $Db::Documentum::VERSION);
printf("DMCL version: %s\n", dmAPIGet("apiversion"));

$docbase = "dmtest";
$username = "dmadmin";
$password = "dmadmin";

# connect;
$result = dm_Connect($docbase, $username, $password) || die("could not connect in " . $docbase . " as " . $username . " with password " . $password);

# run the query;
$status = dmAPIExec("execquery,c,,select r_object_id, r_object_type, object_name from dm_sysobject where folder(ID('0c00c35080000105'))");
if (1 != $status) {
   $err_mess = dmAPIGet("getmessage,c");
   print $err_mess;
   die();
}
$query_id = dmAPIGet("getlastcoll,c");
printf "%-16s  %-20s  %s\n", "r_object_id", "r_object_type", "object_name";
while (dmAPIExec("next,c," . $query_id)) {
   $r_object_id = dmAPIGet("get,c," . $query_id . ",r_object_id");
   $r_object_type = dmAPIGet("get,c," . $query_id . ",r_object_type");
   $object_name = dmAPIGet("get,c," . $query_id . ",object_name");
   printf "%16s  %-20s  %s\n", $r_object_id, $r_object_type, $object_name;
}
dmAPIExec("close,c," . $query_id);

# disconnect;
dmAPIExec("disconnect,c");

The script is very trivial and needs little explanation. Note the new functions dm_Connect, dmAPIExec and dmAPIGet. dmAPISet, dmAPIInit and dmAPIDeInit are also available, but the last two don't need to be invoked explicitly, for they are called automatically at module load-time.
Example of execution:

perl my-test.pl
Perl version: 5.016003
Db::Documentum version: 1.64
DMCL version: 7.3.0000.0205
r_object_id r_object_type object_name
0800c3508000019b dm_job dm_PropagateClientRights
0800c3508000019c dm_job dm_PostUpgradeAction
0800c35080000408 dmc_wfsd_type_info integer
0800c35080000409 dmc_wfsd_type_info boolean
0800c3508000040a dmc_wfsd_type_info double
0800c3508000040b dmc_wfsd_type_info string
0800c3508000040c dmc_wfsd_type_info date
0800c3508000040d dmc_wfsd_type_info repeating integer
0800c3508000040e dmc_wfsd_type_info repeating boolean
0800c3508000040f dmc_wfsd_type_info repeating double
0800c35080000410 dmc_wfsd_type_info repeating string
0800c35080000411 dmc_wfsd_type_info repeating date
0800c35080000426 dm_sysobject dm_indexAgentAcquireLock
0800c35080000587 dm_client_rights dfc_localhost_c0XP4a
0800c35080001065 dm_jms_config JMS dmtest:9080 for dmtest.dmtest
0800c35080001066 dm_jms_config JMS localhost.localdomain:9080 for dmtest.dmtest
0b00c35080000233 dm_folder Workspace Customizations
1700c3508000015d dm_outputdevice Default LP Printer
3a00c3508000013f dm_location storage_01
3a00c35080000140 dm_location common
3a00c35080000141 dm_location events
3a00c35080000142 dm_location log
3a00c35080000143 dm_location config
3a00c35080000144 dm_location dm_dba
3a00c35080000145 dm_location auth_plugin
3a00c35080000146 dm_location ldapcertdb_loc
3a00c35080000147 dm_location temp
3a00c35080000148 dm_location dm_ca_store_fetch_location
3a00c35080000153 dm_location convert
3a00c35080000154 dm_location dsearch
3a00c35080000155 dm_location nls_chartrans
3a00c35080000156 dm_location check_signature
3a00c35080000157 dm_location validate_user
3a00c35080000158 dm_location assume_user
3a00c35080000159 dm_location secure_common_area_writer
3a00c3508000015a dm_location change_password_local
3a00c3508000015b dm_location thumbnail_storage_01
3a00c3508000015c dm_location streaming_storage_01
3a00c35080000226 dm_location replicate_location
3a00c35080000227 dm_location replica_storage_01
3e00c35080000149 dm_mount_point share
6700c35080000100 dm_plugin CSEC Plugin
6700c35080000101 dm_plugin Snaplock Connector

Now, the power of perl and its more than 25’000 modules are at our fingertips to help us tackle those hairy administrative tasks !

groovy

Being a JVM-based language, groovy runs on top of a JVM and therefore benefits from all its advantages, such as automatic garbage collection (although this is not an exclusivity of java) and portability (ditto), and can tap into the countless existing java libraries (ditto).
groovy is used in Documentum's xPlore.
groovy is a powerful, yet easy to learn, programming language still actively maintained by the Apache foundation, cf. here. Similar to java but without its verbosity, it should instantly appeal to java developers doing Documentum administrative tasks.
groovy comes with groovysh, a comfortable and powerful interactive shell for trying out statements and experimenting with the language.
By importing the DfCs, we can use groovy to access Documentum repositories. Knowledge of the DfCs is required of course.
To install groovy, use the distribution’s package manager; e.g. on my Ubuntu, I’ve used:

sudo apt-get install groovy

while on Centos, the following command will do it:

sudo yum install groovy

To test it, let’s use the same program as for perl, but rewritten a la groovy:

#! /usr/bin/groovy

import java.io.*;

import com.documentum.fc.client.*;
import com.documentum.fc.common.*;

docbroker = "dmtest";
docbase = "dmtest";
username = "dmadmin";
password = "dmadmin";
println("attempting to connect to " + docbase + " as " + username + "/" + password + " via docbroker " + docbroker);
try {
   client = DfClient.getLocalClient();
   config = client.getClientConfig();
   config.setString ("primary_host", docbroker);
   logInfo = new DfLoginInfo();
   logInfo.setUser(username);
   logInfo.setPassword(password);
   docbase_session = client.newSession(docbase, logInfo);
   if (docbase_session != null) {
      println("Got a session");
      // do something in the session;
      folderId = new DfId("0c00c35080000105");
      folder = docbase_session.getObject(folderId);
      attrList = "r_object_id,r_object_type,object_name";
      coll = folder.getContents(attrList);
      while (coll.next())
         System.out.printf("ObjectId=%-16s r_object_type=%-20s ObjectName=%s\n",
                           coll.getString("r_object_id"), coll.getString("r_object_type"), coll.getString("object_name"));
      coll.close();
      docbase_session.disconnect();
   }
   else
      println("Didn't get a session");
}
catch (e) {
   println("Exception was: " + e);
}

The two com.documentum import statements pull in the DfCs, so don't forget to add dfc.jar to the CLASSPATH; normally it already is, because the execution environment is a Documentum client, e.g.:

export JAVA_HOME=/home/dmadmin/documentum/shared/java64/1.8.0_77
export CLASSPATH=/home/dmadmin/documentum/shared/dfc/dfc.jar
export PATH=$JAVA_HOME/bin:$PATH

The script also shows that besides its own built-in functions (e.g. println), groovy can use the equivalent functions from the java libraries (e.g. System.out.printf).
To invoke the script:

groovy tgroovy.gry
# or make it executable and call it:
chmod +x tgroovy.gry
./tgroovy.gry

Here is its output:

attempting to connect to dmtest as dmadmin/dmadmin via docbroker dmtest
Got a session
ObjectId=0800c3508000019b r_object_type=dm_job ObjectName=dm_PropagateClientRights
ObjectId=0800c3508000019c r_object_type=dm_job ObjectName=dm_PostUpgradeAction
ObjectId=0800c35080000408 r_object_type=dmc_wfsd_type_info ObjectName=integer
ObjectId=0800c35080000409 r_object_type=dmc_wfsd_type_info ObjectName=boolean
ObjectId=0800c3508000040a r_object_type=dmc_wfsd_type_info ObjectName=double
ObjectId=3a00c35080000227 r_object_type=dm_location ObjectName=replica_storage_01
ObjectId=3e00c35080000149 r_object_type=dm_mount_point ObjectName=share
ObjectId=6700c35080000100 r_object_type=dm_plugin ObjectName=CSEC Plugin
ObjectId=6700c35080000101 r_object_type=dm_plugin ObjectName=Snaplock Connector

jython

jython is a python implementation written in java, see here.
As such, it offers most of the features of the powerful python language, although it stays behind the latest python version (v2.5.3 vs. 3.7).
Like java, groovy, jruby, scala, etc …, jython runs inside a JVM and can use all the java libraries such as the DfCs and become a Documentum client with no changes except adding the DfCs to the $CLASSPATH.
jython appeals especially to people who already know python; like for groovy, a basic level knowledge of the DfCs is required.
To install jython, use your distribution’s package manager, e.g.

# for ubuntu:
sudo apt-get install jython

Make sure the DfCs are present in $CLASSPATH, otherwise add them:

export CLASSPATH=/home/dmadmin/documentum/shared/dfc/dfc.jar:$CLASSPATH

When running the test script below, the DfCs complain about a bad date format:

20:42:05,482 ERROR [File Watcher] com.documentum.fc.common.DfPreferences - [DFC_PREFERENCE_BAD_VALUE] Bad value for preference "dfc.date_format", value="M/d/yyyy, h:mm:ss a"
com.documentum.fc.common.DfException: Illegal syntax found in the date format 'M/d/yyyy, h:mm:ss a'. The default localized short date format will be used.
at com.documentum.fc.common.DfException.newIllegalDateFormatException(DfException.java:109)

Evidently, they are unhappy with the default date format. The work-around is to force one in the dfc.properties file by adding the line below (choose a date format that best suits you):

dfc.date_format=dd.MM.yyyy HH:mm:ss

Since the error did not occur with groovy (nor in the provided JNI-enabled command-line tools such as iapi, idql and dmawk), it is not the DfCs per se that have this problem but the combination of java + jython + DfCs.
Here comes the test script:

#!/usr/bin/env jython

# install jython via your O/S package manager;
# export CLASSPATH=/home/dmadmin/documentum/shared/dfc/dfc.jar:$CLASSPATH
# edit the documentum/shared/config/dfc.properties and add a dfc.date property, e.g.:
# dfc.date_format=dd.MM.yyyy HH:mm:ss
# execute:
#   jython test.jy
# or:
#   chmod +x test.jy; ./test.jy
# can also be executed interactively as follows:
# start jython:
#    jython
# call the test script;
#    execfile("/home/dmadmin/test.jy")

import traceback
import com.documentum.fc.client as DFCClient
import com.documentum.fc.common as DFCCommon

docbroker = "dmtest"
docbase = "dmtest"
username = "dmadmin"
password = "dmadmin"
print("attempting to connect to " + docbase + " as " + username + "/" + password + " via docbroker " + docbroker)
try:
  client = DFCClient.DfClient.getLocalClient()

  config = client.getClientConfig()
  config.setString ("primary_host", docbroker)

  logInfo = DFCCommon.DfLoginInfo()
  logInfo.setUser(username)
  logInfo.setPassword(password)
  docbase_session = client.newSession(docbase, logInfo)

  if docbase_session is not None:
    print("Got a session")
    # do something in the session;
    folderId = DFCCommon.DfId("0c00c35080000105");
    folder = docbase_session.getObject(folderId);
    attrList = "r_object_id,r_object_type,object_name";
    coll = folder.getContents(attrList);
    while coll.next():
       print("ObjectId=" + "%-16s" % coll.getString("r_object_id") + 
             " r_object_type=" + "%-20s" % coll.getString("r_object_type") +
             " ObjectName=" + coll.getString("object_name"));
    coll.close()
    docbase_session.disconnect()
  else:
    print("Didn't get a session")
except Exception:
  traceback.print_exc()


jython test.jy
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.python.core.PySystemState (file:/usr/share/java/jython-2.5.3.jar) to method java.io.Console.encoding()
WARNING: Please consider reporting this to the maintainers of org.python.core.PySystemState
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
attempting to connect to dmtest as dmadmin/dmadmin via docbroker dmtest
Got a session
ObjectId=0800c3508000019b r_object_type=dm_job ObjectName=dm_PropagateClientRights
ObjectId=0800c3508000019c r_object_type=dm_job ObjectName=dm_PostUpgradeAction
ObjectId=0800c35080000408 r_object_type=dmc_wfsd_type_info ObjectName=integer
ObjectId=0800c35080000409 r_object_type=dmc_wfsd_type_info ObjectName=boolean
ObjectId=0800c3508000040a r_object_type=dmc_wfsd_type_info ObjectName=double
ObjectId=3a00c35080000227 r_object_type=dm_location ObjectName=replica_storage_01
ObjectId=3e00c35080000149 r_object_type=dm_mount_point ObjectName=share
ObjectId=6700c35080000100 r_object_type=dm_plugin ObjectName=CSEC Plugin
ObjectId=6700c35080000101 r_object_type=dm_plugin ObjectName=Snaplock Connector

Ironically, jython's launcher is a perl script; it basically initializes java and python environment variables such as the classpath, the java options, and the jython home and path. If the initial WARNINGs are disruptive, edit that script to correct the problem or just redirect stderr to null, e.g.:

jython test.jy 2> /dev/null

So, which one to choose ?

To summarize, the decision tree below may help choosing one scripting language among the preceding ones.

DfCs knowledge ?:
    java proficiency ?:
        choose groovy with the DfCs
    else python proficiency ?:
        choose jython with the DfCs
    else select one of the following:
        get acquainted with one of the above languages
        | choose another JVM-based language
        | give up the DfCs and use DQL/API with perl, python or gawk instead (see below)
else perl proficiency ?:
    choose db::Documentum
else python proficiency ?:
    choose python and DctmAPI.py
else nawk/gawk proficiency ?:
    choose gawk and the dmgawk binding
else select one of the following:
    learn one of the above scripting languages
    | develop a Documentum binding for your preferred scripting language not in the list
    | hire dbi-services for your administrative tasks or projects ;-)

DfCs are clearly indicated for java programmers. They are still supported, and new features are always accessible from them. There are tasks which cannot be done through the API or DQL but only through the DfCs, although generally those are out of the scope of an administrator. Note that even the non-java, non-DfCs languages still end up invoking the DfCs in the background, because they are linked with the libdmcl.so library, which makes behind-the-scene JNI calls to the DfCs for them, thus hiding their complexity. Hopefully, this shared library will stay with us for some time still, otherwise our scripting language choice will be seriously restricted to JVM-based languages and the DfCs.

Cet article A few scripting languages for Documentum est apparu en premier sur Blog dbi services.

See You At The Edge Conference in Austin

Anthony Shorten - Thu, 2019-02-14 16:31
I will be attending the Oracle Utilities Edge Conference in Austin TX in March. This year the agenda is slightly different from past years, with the Technical Sessions being part of the Cloud and Technology Track alongside the Cloud sessions, so there will be lots more speakers this year. I will be running a few sessions around our next generation Testing solution, migration to the cloud, and a deep dive into our futures, as well as co-chairing an exciting session on our directions in Machine Learning in the Oracle Utilities stack.

Oracle Create Schema – multiple DDL statements in a single transaction

The Anti-Kyte - Thu, 2019-02-14 16:08

I was going to begin with some extended Brexit metaphor to illustrate the chaos and confusion that can ensue when you first encounter Oracle’s CREATE SCHEMA command.
Fortunately, the Dutch Government saved me the trouble :


Much as I’d like to believe that the Cookie Monster has finally overcome his Sesame Street type casting, I can’t help noticing that the Brexit Monster never seems to in the same room as this guy…


In Oracle, the term “schema” is used interchangeably with the term “user”. Creating a user in Oracle automatically creates a schema of the same name belonging to that user.
The process is so seamless that it’s almost impossible to tell where one begins and the other ends.
You may therefore be somewhat confused the first time you encounter Oracle’s CREATE SCHEMA command…

Contrary to what you might expect, CREATE SCHEMA does not enable you to create a database user. What it does do is enable you to run multiple DDL statements inside a single transaction.
Now, if you thought that any DDL statement in Oracle would end with an implicit commit of the current transaction, well that makes two of us (at least).

To demonstrate why this is not necessarily true, let’s say we have a user created like this :

create user brexit_monster identified by lets_go_dutch 
    default tablespace users quota 5M on users

grant create session, create table, create view, create procedure to brexit_monster

…and a role created like this :

create role erg

If we want to create a table in the brexit_monster schema, then a view on that table, followed by a grant to the ERG role we could achieve this with three separate statements in a script…

create table brexit_types( brexit_cid varchar2(30),  classification varchar2(65000))

create or replace view promised_land as select brexit_cid from brexit_types where classification = 'HARD'

grant select on promised_land to erg

…however, if the first statement fails, the next two will also fail leaving something rather messy…


If instead, we were to wrap these statements into a single CREATE SCHEMA…

create schema authorization brexit_monster
    create table brexit_types( 
        brexit_cid varchar2(30),  
        classification varchar2(65000))
    create view promised_land as 
        select brexit_cid 
        from brexit_types 
        where classification = 'HARD'
    grant select on promised_land to erg

…the error stack is somewhat more manageable


Note however, that the View statement has changed a bit. It’s now a straight CREATE rather than CREATE OR REPLACE.
In fact, if you try to plug any DDL statement into CREATE SCHEMA that is not either a CREATE TABLE, CREATE VIEW or GRANT then you will get:

ORA-02422: missing or invalid schema element

If we issue the correct statement…

create schema authorization brexit_monster
    create table brexit_types( 
        brexit_cid varchar2(30),  
        classification varchar2(5))
    create view promised_land as 
        select brexit_cid 
        from brexit_types 
        where classification = 'HARD'
    grant select on promised_land to erg

Schema created.

…we can see that all of the DDL has been executed…

select object_name, object_type
from user_objects
order by 2

OBJECT_NAME                    OBJECT_TYPE
------------------------------ -----------------------
BREXIT_TYPES                   TABLE
PROMISED_LAND                  VIEW

select grantee, privilege
from user_tab_privs
where owner = 'BREXIT_MONSTER'
and table_name = 'PROMISED_LAND'

GRANTEE                        PRIVILEGE
------------------------------ ------------------------------
ERG                            SELECT

At this point you may still be somewhat skeptical about whether all this really happens in a single transaction.

I mean, how do we know that Oracle isn't just parsing each statement to make sure they're all valid and then trusting it won't hit a runtime error?
One way to find out is, of course, to engineer a runtime error.

You remember when I created the BREXIT_MONSTER user and you thought that I was a bit stingy with the tablespace quota allocation? Well…

set serverout on
exec dbms_output.put_line('Current transaction = '||dbms_transaction.local_transaction_id(true));

create schema authorization brexit_monster
    create table wto_terms(is_problem varchar2(3) default 'NO')
    create table little_objects as select * from all_objects fetch first 5 rows only
    create table my_objects as select * from all_objects
exec dbms_output.put_line('Current transaction='||dbms_transaction.local_transaction_id(true));

When we run this we get …


We can see that the local_transaction_id has changed, so the transaction that the CREATE SCHEMA was running in has ended. Question is – has it been committed or rolled back?
Now to check if any of the tables have been created…

select table_name
from user_tables
where table_name in ('WTO_TERMS', 'LITTLE_OBJECTS')

no rows selected

We could go into great detail here and do some digging around in trace files.
Then again, there’s been quite enough procrastination around this whole Brexit business already.
As we can see, CREATE SCHEMA really does execute multiple DDL statements in a single database transaction: either all of the statements succeed, or none of them do.
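The all-or-nothing behaviour above is Oracle-specific, but the general idea of transactional DDL can be sketched in a few lines of Python using SQLite, one of the few other engines that also rolls DDL back. The table and view names below simply mirror the example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage the transaction explicitly
cur = conn.cursor()

cur.execute("BEGIN")
try:
    cur.execute("CREATE TABLE brexit_types (brexit_cid TEXT, classification TEXT)")
    cur.execute("CREATE VIEW promised_land AS "
                "SELECT brexit_cid FROM brexit_types WHERE classification = 'HARD'")
    cur.execute("CREATE TABLE broken (")  # deliberate syntax error
    cur.execute("COMMIT")
except sqlite3.Error:
    cur.execute("ROLLBACK")

# The failed statement rolled back the whole batch, so neither the
# table nor the view survives.
names = [row[0] for row in cur.execute(
    "SELECT name FROM sqlite_master "
    "WHERE name IN ('brexit_types', 'promised_land')")]
print(names)
```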
In other words, if it walks like a Brexit Monster and talks about sunlight uplands it’s probably Boris Johnson.

Access Table Without Synonym and Without Schema Name

Tom Kyte - Thu, 2019-02-14 16:06
I have a schema called HCR_SCHEMA, in which I have one table called TEST_EMP, and I have a user, say USER_A, granted select on the TEST_EMP table. I connected as USER_A and tried to execute SELECT * FROM TEST_EMP. Obviously it threw me error ...
Categories: DBA Blogs

print an XMLTYPE Record Type variable

Tom Kyte - Thu, 2019-02-14 16:06
we have the following definition TYPE response_t IS RECORD ( doc XMLTYPE ); A local variable is defined with that record type lv_resp response_t; I want to print the value that this local variable has stored. I tried ...
Categories: DBA Blogs

Expdp views_as_tables fails after view recreated, bug or feature?

Tom Kyte - Thu, 2019-02-14 16:06
This has been giving me fits. I'm running with no maintenance. In the test case below I create a view TEST_VIEW with three columns A, B, C returning one row. Both select from the view and export of the view work as expected. I ...
Categories: DBA Blogs

Oracle ADF or Oracle JET - An Update

Shay Shmeltzer - Thu, 2019-02-14 14:36

Back in 2015 I wrote a blog entry to help our customers get a clear picture about the Oracle ADF vs Oracle JET "debate". Over the years I kept pointing people to this writeup as a simple summary of what we are aiming to do and as a guideline for picking the technology you should use. However, the technology world is an ever-changing place, and I believe that it is time to update that entry to reflect the current status.

While the basic distinctions between the two technologies have not changed, specifically: 

  • ADF is for Java developers, JET is for JavaScript developers
  • ADF is a server-side framework, JET is client-side focused
  • ADF is both a front-end and back-end framework, JET is a UI layer only

Some of the surrounding eco-system has changed and evolved.

Client-Centric Architecture Momentum

Over the past several years we saw an increase in the adoption of UI development that is focused around a JavaScript-based client layer. This is something you can see, for example, in the results from the Stack Overflow yearly survey - back in 2016 JavaScript was used by 54% of developers, and in 2018 the number stood at ~70%.

Today, most of the new projects that we see being developed will pick up a JavaScript based UI layer as their solution for front end development. The adoption of this JS/HTML UI is not limited to web apps, many mobile apps also leverage the architecture to create mobile optimized UIs deployed on devices using either a hybrid or a Progressive Web Application (PWA) architecture. Development organizations like the fact that a client-centric JavaScript/HTML architecture allows them to use a single set of skills to support multi-channel apps. 

Back in 2015 many of the Oracle PaaS services were already leveraging this architecture along with Oracle JET - Oracle's toolkit for building JavaScript based UIs. Today this trend has started to extend to Oracle SaaS - Oracle's cloud apps offering. Several of the modules in Oracle SaaS have introduced UIs built with Oracle JET - leveraging its capabilities to create engaging and dynamic user interfaces. In the coming years you are going to see more and more modules offering new UIs based on Oracle JET. This is a transition for Oracle's SaaS org from server-side generated UIs to a client-centric architecture.

In parallel Oracle JET also added a bunch of capabilities targeting mobile specific scenarios including the ability to render the components to look like native controls on both iOS and Android, the ability to package apps as hybrid mobile apps, and support for offline usage patterns. All those capabilities are further pushing the usage of Oracle JET at Oracle as the UI layer for the various mobile apps we deliver.

One thing to keep in mind: since JET only provides the UI layer, there is still a need to build the backend of the application. This backend will handle data access and business logic, and it will then expose REST services that will be consumed by the JET UI. The existing backend layer for the Oracle SaaS apps was developed with Oracle ADF Business Components, and now those teams are able to leverage the investment they already made in that layer and expose these components through REST services. This way they get to keep the complex logic already implemented in that layer and just create a new, modernized UI layer.

If you track down the "what's new" documents for the versions of ADF that we released since 2015, you'll see that enhancing the out-of-the-box functionality we provide around REST enablement has been a focus area for us. As a result of this effort, when you expose ADF BC as REST you get capabilities such as pagination, query by example, dynamic sorting, relationship management, and more built into the services you expose.
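As a rough sketch of what consuming such a service looks like from the client side, the snippet below builds a paginated, filtered, sorted request URL. The endpoint is made up, and the parameter names (limit, offset, q, orderBy) follow common ADF BC REST conventions - treat them as assumptions rather than a definitive API reference:

```python
from urllib.parse import urlencode

# Hypothetical ADF BC REST endpoint - not a real URL
base = "https://example.com/rest/v1/Employees"

# Pagination, query-by-example and sorting expressed as query parameters
params = {
    "limit": 25,               # page size
    "offset": 50,              # skip the first two pages
    "q": "Salary > 5000",      # query-by-example style filter
    "orderBy": "LastName:asc", # dynamic sorting
}
url = f"{base}?{urlencode(params)}"
print(url)
```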

This approach is something that ADF customers should consider if they decide to adopt the trend of new UIs being built with JavaScript based toolkit. Regardless of the UI framework that you'll choose, remember that the investment you made in the ADF backend can still be leveraged easily.

Better Tooling

One of the other differences I quoted between ADF and JET was the development experience. ADF was focused on providing a more visual and declarative approach for Java developers, while JET had you use code editors to manually write your code.

This is another thing that changed since 2015. Oracle now offers a development platform for JET applications that is focused on a more visual way of building the app - this is Oracle Visual Builder Cloud Service. If you haven't seen a demo before - take 10 minutes and watch this video as an example. As you can see, Visual Builder provides a WYSIWYG UI design, action flow diagrams for business logic, page flow diagrams, property inspectors and more. All this while still allowing you to directly access the code when needed for direct manipulation. 

Visual Development with VBCS

Image 1 - Visual Oracle JET Development in Oracle Visual Builder (Components, Structure Pane, Visual Layout Editor, Property Inspector)

With this environment you can become much more productive building UIs that leverage the power of Oracle JET. Note that Visual Builder is not just about simplifying the layout of your pages, it is also simplifying how you work with variables, binding and business logic - reducing the amount of coding you need to do.

Visual Builder also simplifies the learning curve for people who are new to JavaScript development - making the transition to this new technology smoother. Our SaaS teams have also decided to leverage VB to help them deliver new generations of UI faster.

For customers who already have ADF based applications, another appealing aspect of VB is its ability to understand the structure of ADF BC based REST services and the functionality they provide. Check out this demo to see how you can quickly use Visual Builder to build a JET based mobile app on top of ADF BC REST backend in a declarative way.

A little side note - beyond the ability of VB to create UIs, you can also use it to quickly create tables in a database and business objects (including business logic) on top of them that are exposed as REST services. The technology that drives that layer of VB is based on the same ADF Business Components - another thing that will be of course familiar to ADF developers if they decide to pick up Visual Builder.


So what should you do if you are an Oracle ADF customer today? As always the answer is "it depends" :-)

The first thing to think about is - "Does Oracle ADF answer my needs, and does the application I built with it work as expected?" - assuming that the answer to both is positive, then you can continue as you were.

If, however, there are technical or architectural needs where your existing ADF app doesn't provide the answers you need (for example on the UI side or the mobile device support needs), looking at Oracle JET and Visual Builder is definitely something you should do. Also if you are about to embark on a new project, you might want to consider if a new client-centric architecture for the UI layer is something you can benefit from, and if you think it is - again try out JET and VB and see how you like them. In both cases remember that you can leverage your existing investments in the business objects layer and expose it as REST services to be consumed by new UIs. In fact you can have the two apps running in parallel.

JET and ADF side by side

Image 2- JET based architecture (MVVM) compared to ADF Architecture (MVC) - sharing business services

One more note, many developers like to keep their skills fresh and learn new things, if you are an Oracle ADF developer, I think that picking up Visual Builder and playing with it can get you on the road to learning new JavaScript skills quite quickly while still keeping you in a fun friendly visual cloud IDE - give it a try.


Categories: Development

Docker EE – UCP 3.1.2 installation failed in Azure

Yann Neuhaus - Thu, 2019-02-14 09:06

Docker EE 2.1 is a Containers-as-a-Service (CaaS) platform, on-premises or in the cloud. To manage your cluster of containers, Docker EE allows you to install UCP (Universal Control Plane).

When you plan to install the UCP 3.1.2 in Azure, the installation fails during the deployment of the UCP service. You will get the following error message.

Screenshot 2019-02-14 at 15.44.13

[docker@docker-manager000 ~]$ docker logs ucp-reconcile

Screenshot 2019-02-12 at 08.28.37


If you google the error message you will get a lot of answers, but almost none of them solve this issue. According to the Docker community, the issue affects UCP 3.1.0, 3.1.1 and 3.1.2 in certain cases.

The best solution provided by the community is to install a previous version of UCP (3.0.7) and upgrade it from within the UCP dashboard.

Before installing another version of UCP, remove the current installation:

[docker@docker-manager000 ~]$ sudo docker swarm leave --force
[docker@docker-manager000 ~]$ sudo docker stop $(sudo docker ps -aq)
[docker@docker-manager000 ~]$ sudo docker rm $(sudo docker ps -aq) --force
[docker@docker-manager000 ~]$ sudo docker rmi $(sudo docker images -aq) --force
[docker@docker-manager000 ~]$ sudo docker network prune
[docker@docker-manager000 ~]$ sudo docker system prune --force

Retry the installation with UCP version 3.0.7:

[docker@docker-manager000 ~]$ docker container run --rm -it \
>   --name ucp \
>   -v /var/run/docker.sock:/var/run/docker.sock \
>   docker/ucp:3.0.7 install \
>   --host-address \
>   --pod-cidr \
>   --cloud-provider Azure \
>   --interactive


Unable to find image 'docker/ucp:3.0.7' locally

3.0.7: Pulling from docker/ucp

ff3a5c916c92: Pull complete

a52011fa0ead: Pull complete

dbfd538ef7b5: Pull complete

Digest: sha256:21219684bfafd78c49669865b769c775f132c5b0915a5ec5fd8b41d9a6f52597

Status: Downloaded newer image for docker/ucp:3.0.7

INFO[0000] Your engine version 18.09.2, build 1ac774d (3.10.0-862.11.6.el7.x86_64) is compatible with UCP 3.0.7 (a422a90)

Admin Username: test

Admin Password:

Confirm Admin Password:

INFO[0016] Pulling required images... (this may take a while)

INFO[0016] Pulling docker/ucp-cfssl:3.0.7

INFO[0017] Pulling docker/ucp-dsinfo:3.0.7

INFO[0036] Pulling docker/ucp-metrics:3.0.7

INFO[0039] Pulling docker/ucp-hyperkube:3.0.7

INFO[0046] Pulling docker/ucp-kube-compose:3.0.7

INFO[0074] Pulling docker/ucp-kube-dns-dnsmasq-nanny:3.0.7

INFO[0078] Pulling docker/ucp-controller:3.0.7

INFO[0084] Pulling docker/ucp-etcd:3.0.7

INFO[0085] Pulling docker/ucp-auth:3.0.7

INFO[0087] Pulling docker/ucp-calico-cni:3.0.7

INFO[0090] Pulling docker/ucp-kube-dns-sidecar:3.0.7

INFO[0092] Pulling docker/ucp-interlock-extension:3.0.7

INFO[0093] Pulling docker/ucp-auth-store:3.0.7

INFO[0095] Pulling docker/ucp-agent:3.0.7

INFO[0096] Pulling docker/ucp-compose:3.0.7

INFO[0097] Pulling docker/ucp-swarm:3.0.7

INFO[0099] Pulling docker/ucp-pause:3.0.7

INFO[0100] Pulling docker/ucp-calico-kube-controllers:3.0.7

INFO[0101] Pulling docker/ucp-calico-node:3.0.7

INFO[0110] Pulling docker/ucp-kube-dns:3.0.7

INFO[0135] Pulling docker/ucp-interlock:3.0.7

INFO[0135] Pulling docker/ucp-interlock-proxy:3.0.7

WARN[0139] None of the hostnames we'll be using in the UCP certificates [docker-manager000] contain a domain component.  Your generated certs may fail TLS validation unless you only use one of these shortnames or IPs to connect.  You can use the --san flag to add more aliases

You may enter additional aliases (SANs) now or press enter to proceed with the above list.

Additional aliases:

INFO[0000] Initializing a new swarm at

INFO[0020] Installing UCP with host address - If this is incorrect, please specify an alternative address with the '--host-address' flag

INFO[0020] Deploying UCP Service...

INFO[0120] Installation completed on docker-manager000 (node 5nhn0oqi9h8tjo99t15v3ckr5)

INFO[0120] UCP Instance ID: ccraf0c9vbrcnkxix11r7bdue

INFO[0120] UCP Server SSL: SHA-256 Fingerprint=9D:E3:7D:4C:12:03:B9:03:21:C6:5A:0C:87:C3:AA:A2:48:B0:C1:91:46:D4:12:A1:6F:05:64:C3:72:13:4B:6F

INFO[0120] Login to UCP at

INFO[0120] Username: test

INFO[0120] Password: (your admin password)

StackOverflow post : https://stackoverflow.com/questions/54189511/docker-unable-to-reconcile-state-of-kubernetes-addons-component-error-while-dep/54646555#54646555

Cet article Docker EE – UCP 3.1.2 installation failed in Azure est apparu en premier sur Blog dbi services.

Reintroducing Angelo Rosado

Steven Chan - Thu, 2019-02-14 08:35

Given Steven's recent departure from this blog, we are proactively engaging experts from all areas of Oracle E-Business Suite Technology to continue to bring you the latest news and announcements. 

Angelo Rosado is our resident expert on lifecycle and systems management. He has been a frequent contributor to this and our sister Oracle E-Business Suite and Oracle Cloud blog. I look forward to collaborating with Angelo to bring you the latest updates and innovations in Oracle E-Business Suite technology.

Here are a few of Angelo's recent blog posts:

Related Articles
Categories: APPS Blogs

Partner Webcast – Next-Gen Automated Systems Management for Oracle Exadata

Oracle Exadata Database Machine has been critical for the digital transformation of many companies. But in many cases Oracle Exadata customers have to use multiple tools for monitoring,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

How To Create A New Container Group Using Azure CLI

Yann Neuhaus - Thu, 2019-02-14 07:18
Steps To Create A New Container Group Using Azure CLI

Before starting, an MSDN account is needed to use the Azure platform; please refer to your manager for further information.

In this tutorial, the Azure CLI command-line tool will be installed on Windows and we will deploy a website running the open source Ghost platform.

Choose the right operating system to install the command-line tool. The information can be found here.

When the installation is done, launch the command prompt and type az login to authenticate on the Azure platform (see below):
az login
Wait until a browser is launched so that you can login (see below):

You have logged into Microsoft Azure!

To create a new container, the following commands will be used:

  • az group create
  • az container create
  • az container show
  • az container logs
  • az group delete

Open the command prompt and type powershell to start.

Define the variable resource group that will hold the container group:

  • $resourceGroup = "TutoGhost" (hit enter)

Define the variable location:

  • $location = "westeurope" (hit enter)

Create the resource group and the location:

  • az group create -n $resourceGroup -l $location (hit enter)

The resource group and the location are now created.

Define the variable container group (only lowercase letters, numbers and dashes):

  • $containerGroupName = "site1-ghost" (hit enter)

Create the container group:

  • az container create -g $resourceGroup -n $containerGroupName `
  • --image ghost `
  • --ports 2368 `
  • --ip-address public `
  • --dns-name-label testtutoghost

The dns-name-label should be unique.

Hit enter to execute the commands.

During the execution process, use the following command-line to check the status of the container group:

  • az container show `
  • -g $resourceGroup -n $containerGroupName

The provisioningState field displays the three creation states of the container group: Pending, Creating and Succeeded.
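A hypothetical polling helper for this step might look like the sketch below. In a real script, get_state would shell out to `az container show -g $resourceGroup -n $containerGroupName --query provisioningState -o tsv`; here the sequence of states is simulated so the example is self-contained:

```python
import time

def wait_for_state(get_state, target="Succeeded", interval=0.0, timeout=30):
    """Poll get_state() until it returns target, or raise TimeoutError."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        state = get_state()
        if state == target:
            return state
        time.sleep(interval)
    raise TimeoutError(f"container group never reached state {target!r}")

# Simulated provisioningState values as the deployment progresses
states = iter(["Pending", "Creating", "Succeeded"])
result = wait_for_state(lambda: next(states))
print(result)
```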

A random public IP address is assigned, along with a fully qualified domain name, which is testtutoghost.westeurope.azurecontainer.io.

To check if the container is up and running, open a browser and type the FQDN followed by :2368.

To check the container group logs, type the following command:

  • az container logs `
  • -n $containerGroupName -g $resourceGroup

Since this is a test environment, make sure to delete the resource group (command-line below) when it is no longer used, to avoid paying extra costs.

  • az group delete -n $resourceGroup -y

Cet article How To Create A New Container Group Using Azure CLI est apparu en premier sur Blog dbi services.


Subscribe to Oracle FAQ aggregator