Feed aggregator

Oracle VM Server: Working with ovm cli

Dietrich Schroff - Fri, 2019-04-19 06:01
After getting ovmcli running, here are some commands that are quite helpful when working with Oracle VM Server.
But first:
Starting the ovmcli is done via
ssh admin@localhost -p 10000
at the OVM Manager.

After that you can get some overviews:
OVM> list server
Command: list server
Status: Success
Time: 2019-01-25 06:56:55,065 EST
  id:18:e2:a6:9d:5c:b6:48:3a:9b:d2:b0:0f:56:7e:ab:e9  name:oraclevm
OVM> list vm
Command: list vm
Status: Success
Time: 2019-01-25 06:56:57,357 EST
  id:0004fb0000060000fa3b1b883e717582  name:myAlpineLinux
OVM> list ServerPool
Command: list ServerPool
Status: Success
Time: 2019-01-25 06:57:12,165 EST
  id:0004fb0000020000fca85278d951ce27  name:MyServerPool
A complete list of all list commands can be obtained like this:
OVM> list ?
An overview of the kinds of commands that can be used, such as list:
OVM> help
For Most Object Types:
    create <ObjectType> [(attribute1)="value1"] ... [on <object>]
    edit   <object> (attribute1)="value1" ...
For Most Object Types with Children:
    add <object> to <object>
    remove <object> from <object>
Client Session Commands:
    set alphabetizeAttributes=[Yes|No]
    set commandMode=[Asynchronous|Synchronous]
    set commandTimeout=[1-43200]
    set endLineChars=[CRLF,CR,LF]
    set outputMode=[Verbose,XML,Sparse]
Other Commands:
If you want to get your vm.cfg file, you can use the id from "list vm" and type:
OVM> getVmCfgFileContent Vm id=0004fb0000060000fa3b1b883e717582
Command: getVmCfgFileContent Vm id=0004fb0000060000fa3b1b883e717582
Status: Success
Time: 2019-01-25 06:59:46,875 EST
  OVM_domain_type = xen_pvm
  bootargs =
  disk = [file:/OVS/Repositories/0004fb0000030000dad74d9c43176d2e/ISOs/0004fb0000150000226a713414eaa501.iso,xvda:cdrom,r,file:/OVS/Repositories/0004fb0000030000dad74d9c43176d2e/VirtualDisks/0004fb0000120000f62a7bba83063840.img,xvdb,w]
  bootloader = /usr/bin/pygrub
  vcpus = 1
  memory = 512
  on_poweroff = destroy
  OVM_os_type = Other Linux
  on_crash = restart
  cpu_weight = 27500
  OVM_description =
  cpu_cap = 0
  on_reboot = restart
  OVM_simple_name = myAlpineLinux
  name = 0004fb0000060000fa3b1b883e717582
  maxvcpus = 1
  vfb = [type=vnc,vncunused=1,vnclisten=,keymap=en-us]
  uuid = 0004fb00-0006-0000-fa3b-1b883e717582
  guest_os_type = linux
  OVM_cpu_compat_group =
  OVM_high_availability = false
  vif = []
The Oracle documentation is also very helpful here.

Creating A Microservice With Micronaut, GORM And Oracle ATP

OTN TechBlog - Thu, 2019-04-18 12:56

Over the past year, the Micronaut framework has become extremely popular, and for good reason. It's a pretty revolutionary framework for the JVM world: it uses compile-time dependency injection and AOP without any reflection, which means huge gains in startup time, runtime performance and memory consumption. But it's not enough to just be performant; a framework also has to be easy to use and well documented. The good news is that Micronaut is both. It's fun to use and works great with Groovy, Kotlin and GraalVM. In addition, the people behind Micronaut understand the direction the industry is heading and have built the framework with that direction in mind: things like serverless and cloud deployments are easy, and there are features that provide direct support for them.

In this post we'll look at how to create a Microservice with Micronaut which will expose a "Person" API. The service will utilize GORM which is a "data access toolkit" - a fancy way of saying it's a really easy way to work with databases (from traditional RDBMS to MongoDB, Neo4J and more). Specifically, we'll utilize GORM for Hibernate to interact with an Oracle Autonomous Transaction Processing DB. Here's what we'll be doing:

  1. Create the Micronaut application with Groovy support
  2. Configure the application to use GORM connected to an ATP database.
  3. Create a Person model
  4. Create a Person service to perform CRUD operations on the Person model
  5. Create a controller to interact with the Person service

First things first, make sure you have an Oracle ATP instance up and running. Luckily, that's really easy to do and this post by my boss Gerald Venzl will show you how to set up an ATP instance in less than 5 minutes. Once you have a running instance, grab a copy of your Client Credentials "Wallet" and unzip it somewhere on your local system.

Before we move on to the next step, create a new schema in your ATP instance and create a single table using the following DDL:
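The DDL itself did not survive the feed. Based on the model described later (an implicit ID, a version column for optimistic locking, plus firstName, lastName and isCool), a hypothetical reconstruction might look like this — column names and types are inferred, not the author's original:

```sql
-- Hypothetical sketch: names inferred from the Person model further below
CREATE TABLE person (
    id         NUMBER(19) GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    version    NUMBER(19) NOT NULL,   -- managed by GORM for optimistic locking
    first_name VARCHAR2(50) NOT NULL,
    last_name  VARCHAR2(50) NOT NULL,
    is_cool    NUMBER(1)
);
```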

You're now ready to move on to the next step, creating the Micronaut application.

Create The Micronaut Application

If you've never used it before, you'll need to install Micronaut, which includes a helpful CLI for scaffolding elements such as the application itself, controllers and more as you work with your application. Once you've confirmed the install, run the following command to generate your basic application:
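The generation command was lost in aggregation; with the Micronaut CLI on the path, creating a Groovy-flavored application looks something like this (the application name is my placeholder, not necessarily the author's):

```shell
# scaffold a new Micronaut application with Groovy support
mn create-app example.person-app --lang groovy
```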

Take a look inside that directory to see what the CLI has generated for you. 

As you can see, the CLI has generated a Gradle build script, a Dockerfile and some other config files as well as a `src` directory. That directory looks like this:

At this point you can import the application into your favorite IDE, so do that now. The next step is to generate a controller:
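The controller generation command was also stripped; with the CLI it is along these lines (the name `person` is inferred from the endpoints used later):

```shell
mn create-controller person
```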

We'll make one small adjustment to the generated controller, so open it up and add the `@CompileStatic` annotation. It should look like so once you're done:
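The listing was lost in the feed; assuming the default scaffold that `mn create-controller` produces, the annotated controller would look roughly like:

```groovy
import groovy.transform.CompileStatic
import io.micronaut.http.HttpStatus
import io.micronaut.http.annotation.Controller
import io.micronaut.http.annotation.Get

@CompileStatic
@Controller("/person")
class PersonController {

    @Get("/")
    HttpStatus index() {
        HttpStatus.OK
    }
}
```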

Now run the application using `gradle run` (we can also use the Gradle wrapper with `./gradlew run`) and our application will start up and be available via the browser or a simple curl command to confirm that it's working.  You'll see the following in your console once the app is ready to go:

Give it a shot:
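The original request was stripped by the aggregator; against Micronaut's default port it would be something like:

```shell
curl -i http://localhost:8080/person
```

which should answer with a `200 OK` and an empty body.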

We aren't returning any content, but we can see the '200 OK' which means the application received the request and returned the appropriate response.

To make things easier for development and testing the app locally I like to create a custom Run/Debug configuration in my IDE (IntelliJ IDEA) and point it at a custom Gradle task. We'll need to pass in some System properties eventually, and this enables us to do that when launching from the IDE. Create a new task in `build.gradle` named `myTask` that looks like so:
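The task body itself was elided; a sketch of what such a task could look like (the main class name is my placeholder):

```groovy
// build.gradle: a JavaExec task similar to `run` that forwards the
// system properties set in the IDE Run/Debug configuration to the app
task myTask(type: JavaExec) {
    classpath = sourceSets.main.runtimeClasspath
    main = 'example.Application'   // placeholder: your Application class
    systemProperties System.properties as Map
}
```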

Now create a custom Run/Debug configuration that points at this task and add the VM options that we'll need later on for the Oracle DB connection:

Here are the properties we'll need to populate for easier copy/pasting:
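The property list itself was stripped by the aggregator; for an ATP connection through a wallet they would plausibly be along these lines (the key names follow Micronaut's datasource conventions and are my assumption):

```
-Ddatasources.default.url=jdbc:oracle:thin:@yourdb_high?TNS_ADMIN=/path/to/wallet
-Ddatasources.default.username=YOUR_SCHEMA
-Ddatasources.default.password=YOUR_PASSWORD
```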

Let's move to the next step and get the application ready to talk to ATP!

Configure The Application For GORM and ATP

Before we can configure the application we need to make sure we have the Oracle JDBC drivers available. Download them, create a directory called `libs` in the root of your application and place them there.  Make sure that you have the following JARs in the `libs` directory:

Modify your `dependencies` block in your `build.gradle` file so that the Oracle JDBC JARs and the `micronaut-hibernate-gorm` artifacts are included as dependencies:
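The dependency block was lost; a hedged sketch of the relevant additions (artifact coordinates per the Micronaut 1.x GORM configuration):

```groovy
dependencies {
    // Oracle JDBC drivers placed in libs/ earlier
    compile fileTree(dir: 'libs', include: '*.jar')
    // GORM for Hibernate integration
    compile "io.micronaut.configuration:micronaut-hibernate-gorm"
    // ...the dependencies generated by the CLI remain unchanged
}
```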

Now let's modify the file located at `src/main/resources/application.yml` to configure the datasource and Hibernate.  
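The YAML block did not survive; a minimal sketch, assuming the system properties mentioned earlier supply the URL and credentials at launch:

```yaml
datasources:
  default:
    # url/username/password are picked up from the -D system properties
    driverClassName: oracle.jdbc.OracleDriver
hibernate:
  hbm2ddl:
    auto: none   # the table was created manually via the DDL
  show_sql: true
```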

Our app is now ready to talk to ATP via GORM, so it's time to create a service, model and some controller methods! We'll start with the model.

Creating A Model

GORM models are super easy to work with.  They're just POGO's (Plain Old Groovy Objects) with some special annotations that help identify them as model entities and provide validation via the Bean Validation API. Let's create our `Person` model object by adding a Groovy class called 'Person.groovy' in a new directory called `model`.  Populate the model as such:
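The class body was stripped from the feed; reconstructing it from the description that follows (three properties, Bean Validation annotations, and the 5-to-50 character firstName constraint discussed later), it would look roughly like:

```groovy
import grails.gorm.annotation.Entity

import javax.validation.constraints.NotNull
import javax.validation.constraints.Size

@Entity
class Person {

    @NotNull
    @Size(min = 5, max = 50)   // the (strange) minimum length discussed later
    String firstName

    @NotNull
    String lastName

    Boolean isCool
}
```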

Take note of a few items here. We've annotated the class with @Entity (`grails.gorm.annotation.Entity`) so GORM knows that this is an entity it needs to manage. Our model has 3 properties: firstName, lastName and isCool. If you look back at the DDL we used to create the `person` table above you'll notice that we have two additional columns that aren't addressed in the model: ID and version. The ID column is implicit with a GORM entity and the version column is auto-managed by GORM to handle optimistic locking on entities. You'll also notice a few annotations on the properties which are used for data validation as we'll see later on.

We can start the application up again at this point and we'll see that GORM has identified our entity and Micronaut has configured the application for Hibernate:

Let's move on to creating a service.

Creating A Service

I'm not going to lie to you. If you're waiting for things to get difficult here, you're going to be disappointed. Creating the service that we're going to use to manage `Person` CRUD operations is really easy to do. Create a Groovy class called `PersonService` in a new directory called `service` and populate it with the following:

That's literally all it takes. This service is now ready to handle operations from our controller. GORM is smart enough to take the method signatures that we've provided here and implement the methods. The nice thing about using an abstract class approach (as opposed to using the interface approach) is that we can manually implement the methods ourselves if we have additional business logic that requires us to do so.
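The service listing was elided; given the description (an abstract-class GORM data service with two `findAll` signatures), a sketch could be:

```groovy
import grails.gorm.services.Service

// GORM generates implementations for these signatures at compile time
@Service(Person)
abstract class PersonService {

    abstract Person save(String firstName, String lastName, Boolean isCool)

    abstract List<Person> findAll()

    abstract List<Person> findAll(Map args)   // supports pagination args such as max/offset

    abstract Person get(Long id)

    abstract void delete(Long id)
}
```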

There's no need to restart the application here, as we've made no changes that would be visible at this point. We're going to need to modify our controller for that, so let's create one!

Creating A Controller

Let's modify the `PersonController` that we created earlier to give us some endpoints that we can use for persistence operations. First, we'll need to inject our `PersonService` into the controller. This too is straightforward: simply include the following just inside our class declaration:
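The snippet was stripped; with Micronaut's JSR-330 support the injection is simply:

```groovy
// field injection via javax.inject
@Inject
PersonService personService
```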

The first step in our controller should be a method to save a `Person`.  Let's add a method annotated with `@Post` to handle this and within the method we'll call the `PersonService.save()` method.  If things go well, we'll return the newly created `Person`, if not we'll return a list of validation errors. Note that Micronaut will bind the body of the HTTP request to the `person` argument of the controller method meaning that inside the method we'll have a fully populated `Person` bean to work with.
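The method body was lost; a sketch consistent with the described behavior (the `Person` on success, a 422 plus validation errors on failure) might be:

```groovy
@Post("/save")
HttpResponse save(@Body Person person) {
    if (person.validate()) {
        personService.save(person.firstName, person.lastName, person.isCool)
        return HttpResponse.ok(person)
    }
    // 422 Unprocessable Entity with the list of validation errors
    return HttpResponse.unprocessableEntity().body(person.errors.allErrors)
}
```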

If we start up the application we are now able to persist a `Person` via the `/person/save` endpoint:

Note that we've received a 200 OK response here with an object containing our `Person`.  However, if we tried the operation with some invalid data, we'd receive some errors back:

Since our model (very strangely) indicated that the `Person` firstName must be between 5 and 50 characters we receive a 422 Unprocessable Entity response that contains an array of validation errors back with this response.

Now we'll add a `/list` endpoint that users can hit to list all of the Person objects stored in the ATP instance. We'll set it up with two optional parameters that can be used for pagination.
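The endpoint listing was stripped; a sketch with the two optional pagination parameters (the defaults of 10 and 0 are my assumption):

```groovy
@Get("/list")
List<Person> list(@Nullable Integer max, @Nullable Integer offset) {
    personService.findAll(max: max ?: 10, offset: offset ?: 0)
}
```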

Remember that our `PersonService` had two signatures for the `findAll` method - one that accepted no parameters and another that accepted a `Map`.  The Map signature can be used to pass additional parameters like those used for pagination.  So calling `/person/list` without any parameters will give us all `Person` objects:

Or we can get a subset via the pagination params like so:

We can also add a `/person/get` endpoint to get a `Person` by ID:

And a `/person/delete` endpoint to delete a `Person`:
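Both method bodies were lost in the feed; minimal sketches (the route shapes are my assumption) might be:

```groovy
@Get("/get/{id}")
Person get(Long id) {
    personService.get(id)
}

@Delete("/delete/{id}")
HttpResponse delete(Long id) {
    personService.delete(id)
    HttpResponse.noContent()
}
```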


We've seen here that Micronaut is a simple but powerful way to create performant Microservice applications and that data persistence via Hibernate/GORM is easy to accomplish when using an Oracle ATP backend.  Your feedback is very important to me so please feel free to comment below or interact with me on Twitter (@recursivecodes).

If you'd like to take a look at this entire application you can view it or clone via Github.

Oracle ACEs at APEX Connect 2019, May 7-9 in Bonn

OTN TechBlog - Thu, 2019-04-18 11:36

APEX Connect 2019, the annual conference organized by DOAG (the German Oracle Applications User Group), will be held May 7-9, 2019 in Bonn, Germany. The event features a wide selection of sessions and events covering APEX, SQL and PL/SQL, and JavaScript. Among the session speakers are the following members of the Oracle ACE Program:

Oracle ACE Director Niels de Bruijn
Business Unit Manager APEX, MT AG
Cologne, Germany




Oracle ACE Director Roel Hartman
Director/Senior APEX Developer, APEX Consulting
Apeldoorn, Netherlands



Oracle ACE Director Heli Helskyaho
CEO, Miracle Finland Oy




Oracle ACE Director John Edward Scott
Founder, APEX Evangelists
West Yorkshire, United Kingdom



Oracle ACE Director Kamil Stawiarski
Owner/Partner, ORA-600
Warsaw, Poland



Oracle ACE Director Martin Widlake
Database Architect and Performance Specialist, ORA600
Essex, United Kingdom



Oracle ACE Alan Arentsen
Senior Oracle Developer, Arentsen Database Consultancy
Breda, Netherlands



Oracle ACE Tobias Arnhold
Freelance APEX Developer, Tobias Arnhold IT Consulting



Oracle ACE Dietmar Aust
Owner, OPAL UG
Cologne, Germany



Oracle ACE Kai Donato
Senior Consultant for Oracle APEX Development, MT AG
Cologne, Germany



Oracle ACE Daniel Hochleitner
Freelance Oracle APEX Developer and Consultant
Regensburg, Germany



Oracle ACE Oliver Lemm
Business Unit Manager, MT AG
Cologne, Germany



Oracle ACE Richard Martens
Co-Owner, SMART4Solutions B.V.
Tilburg, Netherlands



Oracle ACE Robert Marz
Principal Technical Architect, its-people GmbH
Frankfurt, Germany



Oracle ACE Matt Mulvaney
Senior Development Consultant, Explorer UK LTD
Leeds, United Kingdom



Oracle ACE Christian Rokitta
Managing Partner, iAdvise
Breda, Netherlands



Oracle ACE Philipp Salvisberg
Senior Principal Consultant, Trivadis AG
Zürich, Switzerland



Oracle ACE Sven-Uwe Weller
Syntegris Information Solutions GmbH



Oracle ACE Associate Carolin Hagemann
Hagemann IT Consulting
Hamburg, Germany



Oracle ACE Associate Moritz Klein
Senior APEX Consultant, MT AG
Frankfurt, Germany


Additional Resources

Migrating Oracle Database & Non Oracle Database to Oracle Cloud

You can directly migrate various source databases to different target cloud deployments running in the Oracle Cloud. Oracle's automated migration tools will move an on-premises database to the...

We share our skills to maximize your revenue!
Categories: DBA Blogs

CubeViewer - Process to Build the Cube Viewer

Anthony Shorten - Wed, 2019-04-17 18:32

As pointed out in the last post, the Cube Viewer is a new way of displaying data for advanced analysis. The Cube Viewer functionality extends the existing ConfigTools (a.k.a Task Optimization) objects to allow the analysis to be defined as a Cube Type and Cube View. Those definitions are used by the widget to display correctly and define what level of interactivity the user can enjoy.

Note: Cube Viewer is available in Oracle Utilities Application Framework V4. and above.

The process of building a cube introduces new concepts and new objects to ConfigTools to allow for an efficient method of defining the analysis and interactivity. In summary form the process is described by the figure below:

Cube View Process

  • Design Your Cube. Decide the data and related information to be used in the Cube Viewer for analysis. This is not just a typical list of values but a design of dimensions, filters and values. This is an important step, as it helps determine whether the Cube Viewer is appropriate for the data to be analyzed.
  • Design Cube SQL. Translating the design into a Cube based SQL. This SQL statement is formatted specifically for use in a cube.
  • Setup Query Zone. The SQL statement designed in the last step needs to be defined in a ConfigTools Query Zone for use in the Cube Type later in the process. This also allows for the configuration of additional information not contained in the SQL to be added to the Cube.
  • Setup Business Service. The Cube Viewer requires a Business Service based upon the standard FWLZDEXP application service. This is also used by the Cube Type later process.
  • Setup Cube Type. Define a Cube Type object defining the Query Zone, Business Service and other settings to be used by the Cube Viewer at runtime. This  brings all the configuration together into a new ConfigTools object.
  • Setup Cube View. Define an instance of the Cube Type with the relevant predefined settings for use in the user interface as a Cube View object. Appropriate users can use this as the initial view into the cube and use it as a basis for any Saved Views they want to implement.

Over the next few weeks, a number of articles will be available to outline each of these steps to help you understand the feature and be on your way to building your own cubes.

Oracle Database 19c download

Dietrich Schroff - Wed, 2019-04-17 15:20
In January 2019 Oracle released the documentation for Oracle Database 19c.

More than 7 weeks later there is still nothing at https://www.oracle.com/downloads/:

For 18c, the gap between the release date of the documentation and the on-premises software was not this long...

Will 19c on premises software be released before may? Or later in summer?

ORA-22835: Buffer too small for CLOB to CHAR or BLOB to RAW conversion (actual: 6843, maximum: 2000)

Tom Kyte - Wed, 2019-04-17 13:26
Hi, I am using the below query to read XML from a BLOB and find the string, but I am facing the error ORA-22835: buffer too small for BLOB to RAW conversion (actual: 15569, maximum: 2000). Please help me out with the below example <code> SELECT XMLTYPE (UTL_RAW.cast_to_varchar...
Categories: DBA Blogs

Merge two rows into one row

Tom Kyte - Wed, 2019-04-17 13:26
Hi Tom, I seek your help on how to compare two rows in a table and, if they are the same, merge them. <code>create table test(id number, start_date date, end_date date, col1 varchar2(10), col2 varchar2(10), col3 varchar2(10)); insert into t...
Categories: DBA Blogs

Bringing up an OpenShift playground in AWS

Yann Neuhaus - Wed, 2019-04-17 13:24

Before we begin: This is in no way production ready, as the title states. In a production setup you would put the internal registry on persistent storage, you would probably have more than one master node, and you would probably have more than one compute node. Security is not covered at all here. This post is intended to quickly bring up something you can play with, that's it. In future posts we will explore more details of OpenShift. So, let's start.

What I used as a starting point are three t2.xlarge instances:

One of them will be the master, there will be one infrastructure and one compute node. All of them are based on the Red Hat Enterprise Linux 7.5 (HVM) AMI:

Once these three instances are running the most important thing is that you set persistent hostnames (if you do not do this the OpenShift installation will fail):

[root@master ec2-user]$ hostnamectl set-hostname --static master.it.dbi-services.com
[root@master ec2-user]$ echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg

Of course you need to do that on all three hosts. Once that is done, because I have no DNS in my setup, /etc/hosts should be adjusted on all the machines, in my case:

[root@master ec2-user]$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
<master-ip> master master.it.dbi-services.com
<node1-ip>  node1 node1.it.dbi-services.com
<node2-ip>  node2 node2.it.dbi-services.com

As everything is based on RedHat you need to register all the machines:

[root@master ec2-user]$ subscription-manager register
Registering to: subscription.rhsm.redhat.com:443/subscription
Username: xxxxxx
The system has been registered with ID: xxxxxxx
The registered system name is: master

Once done, refresh and then list the available subscriptions. There should be at least one which is named like “Red Hat OpenShift”. Having identified the “Pool ID” for that one attach it (on all machines):

[root@master ec2-user]$ subscription-manager refresh
[root@master ec2-user]$ subscription-manager list --available
[root@master ec2-user]$ subscription-manager attach --pool=xxxxxxxxxxxxxxxxxxxxxxxxx

Now you are ready to enable the required repositories (on all machines):

[root@master ec2-user]$ subscription-manager repos --enable="rhel-7-server-rpms" \
    --enable="rhel-7-server-extras-rpms" \
    --enable="rhel-7-server-ansible-2.6-rpms" \
    --enable="rhel-7-server-ose-3.11-rpms"

Repository 'rhel-7-server-rpms' is enabled for this system.
Repository 'rhel-7-server-extras-rpms' is enabled for this system.
Repository 'rhel-7-server-ansible-2.6-rpms' is enabled for this system.
Repository 'rhel-7-server-ose-3.11-rpms' is enabled for this system.

Having the repos enabled the required packages can be installed (on all machines):

[root@master ec2-user]$ yum -y install wget git net-tools bind-utils iptables-services bridge-utils bash-completion kexec-tools sos psacct

Updating all packages to the latest release and rebooting to the potentially new kernel is recommended. As we will be using Docker for this deployment we will install that as well (on all machines):

[root@master ec2-user]$ yum install -y docker
[root@master ec2-user]$ yum update -y
[root@master ec2-user]$ systemctl reboot

Now, that we are up to date and the prerequisites are met we create a new group and a new user. Why that? The complete OpenShift installation is driven by Ansible. You could run all of the installation directly as root, but a better way is to use a dedicated user that has sudo permissions to perform the tasks (on all machines):

[root@master ec2-user]$ groupadd dbi
[root@master ec2-user]$ useradd -g dbi dbi

As Ansible needs to log in to all the machines, you will need to set up password-less ssh connections for the user. I am assuming that you know how to do that.
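For completeness, a password-less setup for the dbi user can be done along these lines (hostnames taken from the /etc/hosts example above):

```shell
# as the dbi user on the master: create a key and distribute it to all hosts
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
for host in master node1 node2; do
  ssh-copy-id dbi@${host}
done
```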

Several tasks of the OpenShift Ansible playbooks need to be executed as root so the “dbi” user needs permissions to do that (on all machines):

[root@master ec2-user]$ cat /etc/sudoers | grep dbi
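The grep output did not survive aggregation; a typical entry granting the dbi user the required passwordless sudo rights would be:

```
dbi ALL=(ALL) NOPASSWD: ALL
```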

There is one last preparation step to be executed on the master only: Installing the Ansible playbooks required to bring up OpenShift:

[root@master ec2-user]$ yum -y install openshift-ansible

That’s all the preparation required for this playground setup. As all the installation is Ansible based we need an inventory file on the master:

[dbi@master ~]$ id -a
uid=1001(dbi) gid=1001(dbi) groups=1001(dbi),994(dockerroot) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
[dbi@master ~]$ pwd
[dbi@master ~]$ cat inventory 
# Create an OSEv3 group that contains the masters, nodes, and etcd groups
[OSEv3:children]
masters
nodes
etcd

# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a password
ansible_ssh_user=dbi
# If ansible_ssh_user is not root, ansible_become must be set to true
ansible_become=true
become_method=sudo
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_users={'admin': '$apr1$4ZbKL26l$3eKL/6AQM8O94lRwTAu611', 'developer': '$apr1$4ZbKL26l$3eKL/6AQM8O94lRwTAu611'}
# Registry settings
# disable checks

# host group for masters
[masters]
master.it.dbi-services.com

# host group for etcd
[etcd]
master.it.dbi-services.com

# host group for nodes, includes region info
[nodes]
master.it.dbi-services.com openshift_node_group_name='node-config-master'
node1.it.dbi-services.com openshift_node_group_name='node-config-compute'
node2.it.dbi-services.com openshift_node_group_name='node-config-infra'

If you need more details about all the variables and host groups used here, please check the OpenShift documentation.

In any case, please execute the prerequisites playbook before starting the installation. If it does not run to the end, or shows any "failed" tasks, you need to fix something before proceeding:

[dbi@master ~]$ ansible-playbook -i inventory /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml 

PLAY [Fail openshift_kubelet_name_override for new hosts] **********************************************

TASK [Gathering Facts] *********************************************************************************
ok: [master.it.dbi-services.com]
ok: [node1.it.dbi-services.com]


PLAY RECAP *********************************************************************************************
localhost                  : ok=11   changed=0    unreachable=0    failed=0   
master.it.dbi-services.com : ok=80   changed=17   unreachable=0    failed=0   
node1.it.dbi-services.com  : ok=56   changed=16   unreachable=0    failed=0   

INSTALLER STATUS ***************************************************************************************
Initialization  : Complete (0:01:40)

When it is fine, install OpenShift:

[dbi@master ~]$ ansible-playbook -i inventory /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml 

That will take some time but at the end your OpenShift cluster should be up and running:

[dbi@master ~]$ oc login -u system:admin
Logged into "https://master:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default

Using project "default".

[dbi@master ~]$ oc get nodes 
NAME                         STATUS    ROLES     AGE       VERSION
master.it.dbi-services.com   Ready     master    1h        v1.11.0+d4cacc0
node1.it.dbi-services.com    Ready     compute   1h        v1.11.0+d4cacc0
node2.it.dbi-services.com    Ready     infra     1h        v1.11.0+d4cacc0

As expected there is one master, one infrastructure and one compute node. All the pods in the default namespace should be running fine:

[dbi@master ~]$ oc get pods -n default
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-1-lmjzs    1/1       Running   0          1h
registry-console-1-n4z5j   1/1       Running   0          1h
router-1-5wl27             1/1       Running   0          1h

All the default Image Streams are there as well:

[dbi@master ~]$ oc get is -n openshift
NAME                                           DOCKER REPO                                                                               TAGS                          UPDATED
apicurito-ui                                   docker-registry.default.svc:5000/openshift/apicurito-ui                                   1.2                           2 hours ago
dotnet                                         docker-registry.default.svc:5000/openshift/dotnet                                         latest,1.0,1.1 + 3 more...    2 hours ago
dotnet-runtime                                 docker-registry.default.svc:5000/openshift/dotnet-runtime                                 2.2,latest,2.0 + 1 more...    2 hours ago
eap-cd-openshift                               docker-registry.default.svc:5000/openshift/eap-cd-openshift                               14.0,15.0,13 + 6 more...      2 hours ago
fis-java-openshift                             docker-registry.default.svc:5000/openshift/fis-java-openshift                             1.0,2.0                       2 hours ago
fis-karaf-openshift                            docker-registry.default.svc:5000/openshift/fis-karaf-openshift                            1.0,2.0                       2 hours ago
fuse-apicurito-generator                       docker-registry.default.svc:5000/openshift/fuse-apicurito-generator                       1.2                           2 hours ago
fuse7-console                                  docker-registry.default.svc:5000/openshift/fuse7-console                                  1.0,1.1,1.2                   2 hours ago
fuse7-eap-openshift                            docker-registry.default.svc:5000/openshift/fuse7-eap-openshift                            1.0,1.1,1.2                   2 hours ago
fuse7-java-openshift                           docker-registry.default.svc:5000/openshift/fuse7-java-openshift                           1.0,1.1,1.2                   2 hours ago
fuse7-karaf-openshift                          docker-registry.default.svc:5000/openshift/fuse7-karaf-openshift                          1.0,1.1,1.2                   2 hours ago
httpd                                          docker-registry.default.svc:5000/openshift/httpd                                          2.4,latest                    2 hours ago
java                                           docker-registry.default.svc:5000/openshift/java                                           8,latest                      2 hours ago
jboss-amq-62                                   docker-registry.default.svc:5000/openshift/jboss-amq-62                                   1.3,1.4,1.5 + 4 more...       2 hours ago
jboss-amq-63                                   docker-registry.default.svc:5000/openshift/jboss-amq-63                                   1.0,1.1,1.2 + 1 more...       2 hours ago
jboss-datagrid73-openshift                     docker-registry.default.svc:5000/openshift/jboss-datagrid73-openshift                     1.0                           
jboss-datavirt63-driver-openshift              docker-registry.default.svc:5000/openshift/jboss-datavirt63-driver-openshift              1.0,1.1                       2 hours ago
jboss-datavirt63-openshift                     docker-registry.default.svc:5000/openshift/jboss-datavirt63-openshift                     1.0,1.1,1.2 + 2 more...       2 hours ago
jboss-decisionserver62-openshift               docker-registry.default.svc:5000/openshift/jboss-decisionserver62-openshift               1.2                           2 hours ago
jboss-decisionserver63-openshift               docker-registry.default.svc:5000/openshift/jboss-decisionserver63-openshift               1.3,1.4                       2 hours ago
jboss-decisionserver64-openshift               docker-registry.default.svc:5000/openshift/jboss-decisionserver64-openshift               1.0,1.1,1.2 + 1 more...       2 hours ago
jboss-eap64-openshift                          docker-registry.default.svc:5000/openshift/jboss-eap64-openshift                          1.7,1.3,1.4 + 6 more...       2 hours ago
jboss-eap70-openshift                          docker-registry.default.svc:5000/openshift/jboss-eap70-openshift                          1.5,1.6,1.7 + 2 more...       2 hours ago
jboss-eap71-openshift                          docker-registry.default.svc:5000/openshift/jboss-eap71-openshift                          1.1,1.2,1.3 + 1 more...       2 hours ago
jboss-eap72-openshift                          docker-registry.default.svc:5000/openshift/jboss-eap72-openshift                          1.0,latest                    2 hours ago
jboss-fuse70-console                           docker-registry.default.svc:5000/openshift/jboss-fuse70-console                           1.0                           2 hours ago
jboss-fuse70-eap-openshift                     docker-registry.default.svc:5000/openshift/jboss-fuse70-eap-openshift                     1.0                           
jboss-fuse70-java-openshift                    docker-registry.default.svc:5000/openshift/jboss-fuse70-java-openshift                    1.0                           2 hours ago
jboss-fuse70-karaf-openshift                   docker-registry.default.svc:5000/openshift/jboss-fuse70-karaf-openshift                   1.0                           2 hours ago
jboss-processserver63-openshift                docker-registry.default.svc:5000/openshift/jboss-processserver63-openshift                1.3,1.4                       2 hours ago
jboss-processserver64-openshift                docker-registry.default.svc:5000/openshift/jboss-processserver64-openshift                1.2,1.3,1.0 + 1 more...       2 hours ago
jboss-webserver30-tomcat7-openshift            docker-registry.default.svc:5000/openshift/jboss-webserver30-tomcat7-openshift            1.1,1.2,1.3                   2 hours ago
jboss-webserver30-tomcat8-openshift            docker-registry.default.svc:5000/openshift/jboss-webserver30-tomcat8-openshift            1.2,1.3,1.1                   2 hours ago
jboss-webserver31-tomcat7-openshift            docker-registry.default.svc:5000/openshift/jboss-webserver31-tomcat7-openshift            1.0,1.1,1.2                   2 hours ago
jboss-webserver31-tomcat8-openshift            docker-registry.default.svc:5000/openshift/jboss-webserver31-tomcat8-openshift            1.0,1.1,1.2                   2 hours ago
jenkins                                        docker-registry.default.svc:5000/openshift/jenkins                                        2,latest,1                    2 hours ago
mariadb                                        docker-registry.default.svc:5000/openshift/mariadb                                        10.1,10.2,latest              2 hours ago
mongodb                                        docker-registry.default.svc:5000/openshift/mongodb                                        2.4,3.2,3.6 + 3 more...       2 hours ago
mysql                                          docker-registry.default.svc:5000/openshift/mysql                                          5.7,latest,5.6 + 1 more...    2 hours ago
nginx                                          docker-registry.default.svc:5000/openshift/nginx                                          1.8,latest,1.10 + 1 more...   2 hours ago
nodejs                                         docker-registry.default.svc:5000/openshift/nodejs                                         8-RHOAR,0.10,6 + 3 more...    2 hours ago
perl                                           docker-registry.default.svc:5000/openshift/perl                                           5.20,5.24,5.16 + 1 more...    2 hours ago
php                                            docker-registry.default.svc:5000/openshift/php                                            5.6,5.5,7.0 + 1 more...       2 hours ago
postgresql                                     docker-registry.default.svc:5000/openshift/postgresql                                     latest,10,9.2 + 3 more...     2 hours ago
python                                         docker-registry.default.svc:5000/openshift/python                                         2.7,3.3,3.4 + 3 more...       2 hours ago
redhat-openjdk18-openshift                     docker-registry.default.svc:5000/openshift/redhat-openjdk18-openshift                     1.0,1.1,1.2 + 2 more...       2 hours ago
redhat-sso70-openshift                         docker-registry.default.svc:5000/openshift/redhat-sso70-openshift                         1.3,1.4                       2 hours ago
redhat-sso71-openshift                         docker-registry.default.svc:5000/openshift/redhat-sso71-openshift                         1.1,1.2,1.3 + 1 more...       2 hours ago
redhat-sso72-openshift                         docker-registry.default.svc:5000/openshift/redhat-sso72-openshift                         1.0,1.1,1.2                   2 hours ago
redis                                          docker-registry.default.svc:5000/openshift/redis                                          3.2,latest                    2 hours ago
rhdm70-decisioncentral-openshift               docker-registry.default.svc:5000/openshift/rhdm70-decisioncentral-openshift               1.0,1.1                       2 hours ago
rhdm70-kieserver-openshift                     docker-registry.default.svc:5000/openshift/rhdm70-kieserver-openshift                     1.0,1.1                       2 hours ago
rhdm71-controller-openshift                    docker-registry.default.svc:5000/openshift/rhdm71-controller-openshift                    1.0,1.1                       2 hours ago
rhdm71-decisioncentral-indexing-openshift      docker-registry.default.svc:5000/openshift/rhdm71-decisioncentral-indexing-openshift      1.0,1.1                       2 hours ago
rhdm71-decisioncentral-openshift               docker-registry.default.svc:5000/openshift/rhdm71-decisioncentral-openshift               1.1,1.0                       2 hours ago
rhdm71-kieserver-openshift                     docker-registry.default.svc:5000/openshift/rhdm71-kieserver-openshift                     1.0,1.1                       2 hours ago
rhdm71-optaweb-employee-rostering-openshift    docker-registry.default.svc:5000/openshift/rhdm71-optaweb-employee-rostering-openshift    1.0,1.1                       2 hours ago
rhdm72-controller-openshift                    docker-registry.default.svc:5000/openshift/rhdm72-controller-openshift                    1.0,1.1                       2 hours ago
rhdm72-decisioncentral-indexing-openshift      docker-registry.default.svc:5000/openshift/rhdm72-decisioncentral-indexing-openshift      1.0,1.1                       2 hours ago
rhdm72-decisioncentral-openshift               docker-registry.default.svc:5000/openshift/rhdm72-decisioncentral-openshift               1.1,1.0                       2 hours ago
rhdm72-kieserver-openshift                     docker-registry.default.svc:5000/openshift/rhdm72-kieserver-openshift                     1.0,1.1                       2 hours ago
rhdm72-optaweb-employee-rostering-openshift    docker-registry.default.svc:5000/openshift/rhdm72-optaweb-employee-rostering-openshift    1.0,1.1                       2 hours ago
rhpam70-businesscentral-indexing-openshift     docker-registry.default.svc:5000/openshift/rhpam70-businesscentral-indexing-openshift     1.0,1.1,1.2                   2 hours ago
rhpam70-businesscentral-monitoring-openshift   docker-registry.default.svc:5000/openshift/rhpam70-businesscentral-monitoring-openshift   1.1,1.2,1.0                   2 hours ago
rhpam70-businesscentral-openshift              docker-registry.default.svc:5000/openshift/rhpam70-businesscentral-openshift              1.0,1.1,1.2                   2 hours ago
rhpam70-controller-openshift                   docker-registry.default.svc:5000/openshift/rhpam70-controller-openshift                   1.0,1.1,1.2                   2 hours ago
rhpam70-kieserver-openshift                    docker-registry.default.svc:5000/openshift/rhpam70-kieserver-openshift                    1.0,1.1,1.2                   2 hours ago
rhpam70-smartrouter-openshift                  docker-registry.default.svc:5000/openshift/rhpam70-smartrouter-openshift                  1.0,1.1,1.2                   2 hours ago
rhpam71-businesscentral-indexing-openshift     docker-registry.default.svc:5000/openshift/rhpam71-businesscentral-indexing-openshift     1.0,1.1                       2 hours ago
rhpam71-businesscentral-monitoring-openshift   docker-registry.default.svc:5000/openshift/rhpam71-businesscentral-monitoring-openshift   1.0,1.1                       2 hours ago
rhpam71-businesscentral-openshift              docker-registry.default.svc:5000/openshift/rhpam71-businesscentral-openshift              1.0,1.1                       2 hours ago
rhpam71-controller-openshift                   docker-registry.default.svc:5000/openshift/rhpam71-controller-openshift                   1.0,1.1                       2 hours ago
rhpam71-kieserver-openshift                    docker-registry.default.svc:5000/openshift/rhpam71-kieserver-openshift                    1.0,1.1                       2 hours ago
rhpam71-smartrouter-openshift                  docker-registry.default.svc:5000/openshift/rhpam71-smartrouter-openshift                  1.0,1.1                       2 hours ago
rhpam72-businesscentral-indexing-openshift     docker-registry.default.svc:5000/openshift/rhpam72-businesscentral-indexing-openshift     1.1,1.0                       2 hours ago
rhpam72-businesscentral-monitoring-openshift   docker-registry.default.svc:5000/openshift/rhpam72-businesscentral-monitoring-openshift   1.0,1.1                       2 hours ago
rhpam72-businesscentral-openshift              docker-registry.default.svc:5000/openshift/rhpam72-businesscentral-openshift              1.0,1.1                       2 hours ago
rhpam72-controller-openshift                   docker-registry.default.svc:5000/openshift/rhpam72-controller-openshift                   1.0,1.1                       2 hours ago
rhpam72-kieserver-openshift                    docker-registry.default.svc:5000/openshift/rhpam72-kieserver-openshift                    1.0,1.1                       2 hours ago
rhpam72-smartrouter-openshift                  docker-registry.default.svc:5000/openshift/rhpam72-smartrouter-openshift                  1.0,1.1                       2 hours ago
ruby                                           docker-registry.default.svc:5000/openshift/ruby                                           2.2,2.3,2.4 + 3 more...       2 hours ago

Happy playing …

The article Bringing up an OpenShift playground in AWS appeared first on Blog dbi services.

Example of coe_xfr_sql_profile force_match TRUE

Bobby Durrett's DBA Blog - Wed, 2019-04-17 10:57

Monday, I used the coe_xfr_sql_profile.sql script from Oracle Support’s SQLT scripts to resolve a performance issue. I had to set the parameter force_match to TRUE so that the SQL Profile I created would apply to all SQL statements with the same FORCE_MATCHING_SIGNATURE value.

I had just come off the on-call rotation at 8 am Monday, and around 4 pm that day a coworker came to me with a performance problem. A PeopleSoft Financials job was running longer than it normally did. Since it had already run for several hours, I got an AWR report of the last hour, looked at the SQL ordered by Elapsed Time section, and found a number of similar INSERT statements with different SQL_IDs.

The inserts were the same except for certain constant values. So, I used my fmsstat2.sql script with ss.sql_id = '60dp9r760ja88' to get the FORCE_MATCHING_SIGNATURE value for these inserts. Here is the output:

FORCE_MATCHING_SIGNATURE SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------------------ ------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
     5442820596869317879 60dp9r760ja88         3334601 15-APR-19 PM                1         224414.511     224412.713         2.982                  0                      0                   .376             5785269                 40                   3707
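fmsstat2.sql itself is the author's own script and is not reproduced in the post. As a rough, hypothetical sketch, a query in the same spirit against the AWR history views might look like this (column list abbreviated; elapsed_time_delta is in microseconds, so dividing by 1000 and by the execution count gives average milliseconds):

```sql
-- Sketch only, not the actual fmsstat2.sql: per-snapshot statistics for one
-- statement (or for every statement sharing a FORCE_MATCHING_SIGNATURE).
SELECT ss.force_matching_signature,
       ss.sql_id,
       ss.plan_hash_value,
       sn.end_interval_time,
       ss.executions_delta,
       ss.elapsed_time_delta / 1000 /
         GREATEST(ss.executions_delta, 1) AS elapsed_avg_ms
  FROM dba_hist_sqlstat  ss
  JOIN dba_hist_snapshot sn
    ON ss.snap_id         = sn.snap_id
   AND ss.dbid            = sn.dbid
   AND ss.instance_number = sn.instance_number
 WHERE ss.sql_id = '60dp9r760ja88'
   -- or, for the second run described below:
   -- ss.force_matching_signature = 5442820596869317879
 ORDER BY sn.end_interval_time, ss.sql_id;
```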

Now that I had the FORCE_MATCHING_SIGNATURE value 5442820596869317879, I reran fmsstat2.sql with ss.FORCE_MATCHING_SIGNATURE = 5442820596869317879 instead of ss.sql_id = '60dp9r760ja88' and got all of the insert statements and their PLAN_HASH_VALUE values. I needed these to use coe_xfr_sql_profile.sql to generate a script that creates a SQL Profile forcing a better plan onto the insert statements. Here is the beginning of the output of the fmsstat2.sql script:

FORCE_MATCHING_SIGNATURE SQL_ID        PLAN_HASH_VALUE END_INTERVAL_TIME         EXECUTIONS_DELTA Elapsed Average ms CPU Average ms IO Average ms Cluster Average ms Application Average ms Concurrency Average ms Average buffer gets Average disk reads Average rows processed
------------------------ ------------- --------------- ------------------------- ---------------- ------------------ -------------- ------------- ------------------ ---------------------- ---------------------- ------------------- ------------------ ----------------------
     5442820596869317879 0yzz90wgcybuk      1314604389 14-APR-19 PM                1            558.798        558.258             0                  0                      0                      0               23571                  0                    812
     5442820596869317879 5a86b68g7714k      1314604389 14-APR-19 PM                1            571.158        571.158             0                  0                      0                      0               23245                  0                    681
     5442820596869317879 9u1a335s936z9      1314604389 14-APR-19 PM                1            536.886        536.886             0                  0                      0                      0               21851                  0                      2
     5442820596869317879 a922w6t6nt6ry      1314604389 14-APR-19 PM                1            607.943        607.943             0                  0                      0                      0               25948                  0                   1914
     5442820596869317879 d5cca46bzhdk3      1314604389 14-APR-19 PM                1            606.268         598.11             0                  0                      0                      0               25848                  0                   1763
     5442820596869317879 gwv75p0fyf9ys      1314604389 14-APR-19 PM                1            598.806        598.393             0                  0                      0                      0               24981                  0                   1525
     5442820596869317879 0u2rzwd08859s         3334601 15-APR-19 AM                1          18534.037      18531.635             0                  0                      0                      0              713757                  0                     59
     5442820596869317879 1spgv2h2sb8n5         3334601 15-APR-19 AM                1          30627.533      30627.533          .546                  0                      0                      0             1022484                 27                    487
     5442820596869317879 252dsf173mvc4         3334601 15-APR-19 AM                1          47872.361      47869.859          .085                  0                      0                      0             1457614                  2                    476
     5442820596869317879 25bw3269yx938         3334601 15-APR-19 AM                1         107915.183     107912.459         1.114                  0                      0                      0             2996363                 26                   2442
     5442820596869317879 2ktg1dvz8rndw         3334601 15-APR-19 AM                1          62178.512      62178.512          .077                  0                      0                      0             1789536                  3                   1111
     5442820596869317879 4500kk2dtkadn         3334601 15-APR-19 AM                1         106586.665     106586.665         7.624                  0                      0                      0             2894719                 20                   1660
     5442820596869317879 4jmj30ym5rrum         3334601 15-APR-19 AM                1          17638.067      17638.067             0                  0                      0                      0              699273                  0                    102
     5442820596869317879 657tp4jd07qn2         3334601 15-APR-19 AM                1          118948.54      118890.57             0                  0                      0                      0             3257090                  0                   2515
     5442820596869317879 6gpwwnbmch1nq         3334601 15-APR-19 AM                0          48685.816      48685.816          .487                  0                      0                  1.111             1433923                 12                      0
     5442820596869317879 6k1q5byga902a         3334601 15-APR-19 AM                1            2144.59        2144.59             0                  0                      0                      0              307369                  0                      2

The first few lines show the good plan that these inserts ran on earlier runs. The good plan has PLAN_HASH_VALUE 1314604389 and runs in about 600 milliseconds; the bad plan has PLAN_HASH_VALUE 3334601 and runs in 100 seconds or so. I took a look at the plans before creating the SQL Profile but did not really dig into why the plans changed. It was around 4:30 pm, I was not on call, and I wanted to get out the door at a normal time and leave the problems to the on-call DBA. Here is the good plan:

Plan hash value: 1314604389

| Id  | Operation                       | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | INSERT STATEMENT                |                    |       |       |  3090 (100)|          |
|   1 |  HASH JOIN RIGHT SEMI           |                    |  2311 |  3511K|  3090   (1)| 00:00:13 |
|   2 |   VIEW                          | VW_SQ_1            |   967 | 44482 |  1652   (1)| 00:00:07 |
|   3 |    HASH JOIN                    |                    |   967 | 52218 |  1652   (1)| 00:00:07 |
|   4 |     TABLE ACCESS FULL           | PS_PST_VCHR_TAO4   |    90 |  1980 |    92   (3)| 00:00:01 |
|   5 |     NESTED LOOPS                |                    | 77352 |  2417K|  1557   (1)| 00:00:07 |
|   6 |      INDEX UNIQUE SCAN          | PS_BUS_UNIT_TBL_GL |     1 |     5 |     0   (0)|          |
|   7 |      TABLE ACCESS BY INDEX ROWID| PS_DIST_LINE_TMP4  | 77352 |  2039K|  1557   (1)| 00:00:07 |
|   8 |       INDEX RANGE SCAN          | PS_DIST_LINE_TMP4  | 77352 |       |   756   (1)| 00:00:04 |
|   9 |   TABLE ACCESS BY INDEX ROWID   | PS_VCHR_TEMP_LN4   | 99664 |   143M|  1434   (1)| 00:00:06 |
|  10 |    INDEX RANGE SCAN             | PSAVCHR_TEMP_LN4   | 99664 |       |   630   (1)| 00:00:03 |

Here is the bad plan:

Plan hash value: 3334601

| Id  | Operation                          | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
|   0 | INSERT STATEMENT                   |                    |       |       |  1819 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID       | PS_VCHR_TEMP_LN4   |  2926 |  4314K|  1814   (1)| 00:00:08 |
|   2 |   NESTED LOOPS                     |                    |  2926 |  4446K|  1819   (1)| 00:00:08 |
|   3 |    VIEW                            | VW_SQ_1            |     1 |    46 |     4   (0)| 00:00:01 |
|   4 |     SORT UNIQUE                    |                    |     1 |    51 |            |          |
|   5 |      TABLE ACCESS BY INDEX ROWID   | PS_PST_VCHR_TAO4   |     1 |    23 |     1   (0)| 00:00:01 |
|   6 |       NESTED LOOPS                 |                    |     1 |    51 |     4   (0)| 00:00:01 |
|   7 |        NESTED LOOPS                |                    |     1 |    28 |     3   (0)| 00:00:01 |
|   8 |         INDEX UNIQUE SCAN          | PS_BUS_UNIT_TBL_GL |     1 |     5 |     0   (0)|          |
|   9 |         TABLE ACCESS BY INDEX ROWID| PS_DIST_LINE_TMP4  |     1 |    23 |     3   (0)| 00:00:01 |
|  10 |          INDEX RANGE SCAN          | PS_DIST_LINE_TMP4  |     1 |       |     2   (0)| 00:00:01 |
|  11 |        INDEX RANGE SCAN            | PS_PST_VCHR_TAO4   |     1 |       |     1   (0)| 00:00:01 |
|  12 |    INDEX RANGE SCAN                | PSAVCHR_TEMP_LN4   |   126K|       |  1010   (1)| 00:00:05 |

Notice that in the bad plan the Rows column has 1 in it on many of the lines, but in the good plan it has larger numbers. Something about the statistics and the values in the where clause caused the optimizer to build the bad plan as if no rows would be accessed from these tables even though many rows would be accessed. So, it made a plan based on wrong information. But I had no time to dig further. I did ask my coworker if anything had changed about this job and nothing had.

So, I created a SQL Profile script by going to the utl subdirectory under the directory where the SQLT scripts were installed on the database server. I generated the script by running coe_xfr_sql_profile with the arguments gwv75p0fyf9ys and 1314604389 (the SQL_ID and the good plan's PLAN_HASH_VALUE). I then edited the generated script, coe_xfr_sql_profile_gwv75p0fyf9ys_1314604389.sql, changed the setting force_match=>FALSE to force_match=>TRUE, and ran it. The long running job finished shortly thereafter, and no new incidents have occurred in subsequent runs.
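The generated script ends in a call to DBMS_SQLTUNE.IMPORT_SQL_PROFILE, and the one edit is to its force_match argument. Roughly, abbreviated (sql_txt and h, the outline hints, are populated earlier in the generated script; exact contents vary by SQLT version):

```sql
-- Abbreviated sketch of the tail of the generated script.
BEGIN
  DBMS_SQLTUNE.IMPORT_SQL_PROFILE(
    sql_text    => sql_txt,              -- full text of the statement
    profile     => h,                    -- outline hints captured from the good plan
    name        => 'coe_gwv75p0fyf9ys_1314604389',
    description => '...',                -- generated by the script
    category    => 'DEFAULT',
    validate    => TRUE,
    replace     => TRUE,
    force_match => TRUE  /* changed from FALSE so the profile applies to every
                            statement with the same FORCE_MATCHING_SIGNATURE,
                            not just this exact SQL text */
  );
END;
/
```

With force_match=>FALSE the profile would only match the literal text of gwv75p0fyf9ys; with TRUE it matches all of the similar inserts that differ only in their constant values.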

The only thing that confuses me is that when I run fmsstat2.sql now with ss.FORCE_MATCHING_SIGNATURE = 5442820596869317879, I do not see any runs with the good plan. Maybe future runs of the job have a different FORCE_MATCHING_SIGNATURE and the SQL Profile only helped the one job. If so, the later runs may have had correct statistics and arrived at the good plan on their own.

I wanted to post this to give an example of using force_match=>TRUE with coe_xfr_sql_profile. I had an earlier post on this subject, but I thought another example could not hurt. I also wanted to show how I use fmsstat2.sql to find multiple SQL statements by their FORCE_MATCHING_SIGNATURE value. I realize that SQL Profiles are a kind of band-aid rather than a solution to the real problem. But I got out the door by 5 pm on Monday and did not get woken up in the middle of the night, so sometimes a quick fix is what you need.


Categories: DBA Blogs

Developers Decide One Cloud Isn’t Enough

OTN TechBlog - Wed, 2019-04-17 08:00


Developers have significantly greater choice today than even just a few years ago when considering where to build, test, and host their services and applications, deciding which clouds to move existing on-premises workloads to, and choosing which of the multitude of open source projects to leverage. So why, in this new era of empowered developers and expanding choice, have so many organizations pursued a single-cloud strategy? In recent years, the proliferation of new cloud native open source projects and of cloud service providers adding capacity, functionality, tools, resources, and services has resulted in better performance, different cost models, and more choice for developers and DevOps engineers, while increasing competition among providers. This is leading into a new era of cloud choice, in which the norm will be a multi-cloud and hybrid cloud model.

As new cloud native design and development technologies emerge, such as Kubernetes, serverless computing, and the maturing discipline of microservices, they help accelerate, simplify, and expand deployment and development options. Users can leverage these new technologies with their existing designs and deployments, and the flexibility they afford expands users' options to run on many different platforms. Given this rapidly changing cloud landscape, it is not surprising that hybrid cloud and multi-cloud strategies are being adopted by an increasing number of companies today.

For a deeper dive into Prediction #7 of the 10 Predictions for Developers in 2019 offered by Siddhartha Agarwal, “Developers Decide One Cloud Isn’t Enough”, we look at the growing trend for companies and developers to choose more than one cloud provider. We’ll examine a few of the factors they consider, the needs determined by a company’s place in the development cycle, business objectives, and level of risk tolerance, and predict how certain choices will trend in 2019 and beyond.


Different Strokes

We are in a heterogeneous IT world today. A plethora of choice and use cases, coupled with widely varying technical and business needs and approaches to solving them, give rise to different solutions. No two are exactly the same, but development projects today typically fall within the following scenarios.

A. Born in the cloud development – these projects suffer little to no constraint from existing applications; it is highly efficient and cost-effective to begin design in the cloud. They naturally leverage containers and new open source development tools such as serverless platforms (https://fnproject.io/) and service meshes (e.g., Istio). A decade ago, startup costs based on datacenter needs alone were a serious barrier to entry for budding tech companies; cloud computing has completely changed this.

B. On premises development moving to cloud – enterprises in this category have many more factors to consider. Java teams for example are rapidly adopting frameworks like Helidon and GraalVM to help them move to a microservice architecture and migrate applications to the cloud. But will greenfield development projects start only in cloud? Do they migrate legacy workloads to cloud? How do they balance existing investments with new opportunities? And what about the interface between on-premises and cloud?

C. Remaining mostly on premises but moving some services to cloud – options are expanding for those in this category. A hybrid cloud approach has been growing, and we predict it will continue to grow over at least the next few years. The cloud native stacks available on premises now mirror the cloud native stacks in the cloud, enabling a new generation of hybrid cloud use cases. An integrated and supported cloud native framework that spans on-premises and cloud options delivers choice once again. And security, privacy, and latency concerns will dictate some of these projects' unique development needs.


If It Ain’t Broke, Don’t Fix It?

IT investments are real. Inertia can be hard to overcome. Let’s look at the main reasons for not distributing workloads across multiple clouds.  

  • Economy of scale tops the list, as most cloud providers will offer discounts for customers who go all in; larger workloads on one cloud provide negotiating leverage.
  • Development staff familiarity with one chosen platform makes it easier to bring on and train new developers, shortening ramp time to productivity.
  • Custom features or functionality unique to the main cloud provider may need to be removed or redesigned in moving to another platform. Even on supposedly open platforms, developers must be aware of the not-so-obvious features impacting portability.
  • Geographical location of datacenters for privacy and/or latency concerns in less well-served areas of the world may also inhibit choice, or force uncomfortable trade-offs.
  • Risk mitigation is another significant factor, as enterprises seek to balance conflicting business needs with associated risks. Lean development teams often need to choose between taking on new development work vs modernizing legacy applications, when resources are scarce.

Change is Gonna Do You Good

These are valid concerns, but as dev teams look more deeply into the robust services and offerings emerging today, the trend is to diversify.

The most frequently cited concern is that of vendor lock-in. This counter-argument to that of economy of scale says that the more difficult it is to move your workloads off of one provider, the less motivated that vendor is to help reduce your cost of operations. For SMBs (small to mid-sized businesses) without a ton of leverage in comparison to large enterprises, this can be significant. Ensuring portability of workloads is important. A comprehensive cloud native infrastructure is imperative here: one that includes container orchestration but also streaming, CI/CD, and observability and analysis (e.g., Prometheus and Grafana). Containers and Kubernetes deliver portability, provided your cloud vendor uses unmodified open source code. In this model, a developer can develop their web application on their laptop, push it into a CI/CD system on one cloud, and leverage another cloud for managed Kubernetes to run their container-based app. However, the minute you start using specific APIs from the underlying platform, moving to another platform is much more difficult. AWS Lambda is one of many examples.

Mergers, acquisitions, changing business plans or practices, or other unforeseen events may impact a business at a time when they are not equipped to deal with it. Having greater flexibility to move with changing circumstances, and not being rushed into decisions, is also important. Consider for example, the merger of an organization that uses an on-premises PaaS, such as OpenShift, merging with another organization that has leveraged the public cloud across IaaS, PaaS and SaaS. It’s important to choose interoperable technologies to anticipate these scenarios.

Availability is another reason cited by customers. A thoughtfully designed multi-cloud architecture not only offers potential negotiating power as mentioned above, but also allows for failover in case of outages, DDoS attacks, local catastrophes, and the like. Larger cloud providers with massive resources and proliferation of datacenters and multiple availability domains offer a clear advantage here, but it also behooves the consumer to distribute risk across not only datacenters, but over several providers.

Another important set of factors is related to cost and ROI. Running the same workload on multiple cloud providers to compare cost and performance can help achieve business goals, and also help inform design practices.  

Adopting open source technologies enables businesses to choose where to run their applications based on the criteria they deem most important, be they technical, cost, business, compliance, or regulatory concerns. Moving to open source thus opens up the possibility to run applications on any cloud. That is, any CNCF-certified Kubernetes managed cloud service can safely run Kubernetes – so enterprises can take advantage of this key benefit to drive a multi-cloud strategy.

The trend in 2019 is moving strongly in the direction of design practices that support all aspects of a business’s goals, with the best offers, pricing and practices from multiple providers. This direction makes enterprises more competitive – maximally productive, cost-effective, secure, available, and flexible regarding platform choice.


Design for Flexibility

Though having a multi-cloud strategy seems to be the growing trend, it does come with some inherent challenges. To address issues like interoperability among multiple providers and establishing depth of expertise with a single cloud provider, we’re seeing an increased use of different technologies that help to abstract away some of the infrastructure interoperability hiccups. This is particularly important to developers, who seek the best available technologies that fit their specific needs.

Serverless computing seeks to reduce the awareness of any notion of infrastructure. Consider it similar to water or electricity utilities – once you have attached your own minimal home infrastructure to the endpoint offered by the utility, you simply turn on the tap or light switch, and pay for what you consume. The service scales automatically – for all intents and purposes, you may consume as much output of the utility or service as desired, and the bill goes up and down accordingly. When you are not consuming the service, there is no (or almost no) overhead.  

Development teams are picking cloud vendors based on capabilities they need. This is especially true in SaaS. SaaS is a cloud-based software delivery model with payment based on usage, rather than license or support-based pricing. The SaaS provider develops, maintains and updates the software, along with the hardware, middleware, application software, and security. SaaS customers can more easily predict total cost of ownership with greater accuracy. The more modern, complete SaaS solutions also allow for greater ease of configuration and personalization, and offer embedded analytics, data portability, cloud security, support for emerging technologies, and connected, end-to-end business processes.

Serverless computing not only provides simplicity through abstraction of infrastructure, its design patterns also promote the use of third-party managed services whenever possible. This provides flexibility and allows you to choose the best solution for your problem from the growing suite of products and services available in the cloud, from software-defined networking and API gateways, to databases and managed streaming services. In this design paradigm, everything within an application that is not purely business logic can be efficiently outsourced.

Companies are finding it increasingly easy to connect elements together with serverless functionality to achieve their business logic and design goals. Serverless deployments talking to multiple endpoints can run almost anywhere; serverless becomes the “glue” used to stitch together the best services available, from any provider.

Because serverless deployments can run anywhere, even on multiple cloud platforms, flexibility of choice expands even further, making serverless arguably the best design option for those desiring portability and openness.



There are many pieces required to deliver a successful multi-cloud approach. Modern developers use specific criteria to validate whether a particular cloud is “open” and supports a multi-cloud approach. Does it have the ability to

  • extract/export data without incurring significant expense or overhead?
  • be deployed either on-premises or in the public cloud, including for custom applications, integrations between applications, etc.?
  • monitor and manage applications that might reside on-premises or in other clouds from a single console, with the ability to aggregate monitoring/management data?

And does it have a good set of APIs that enables access to everything in the UI via an API? Does it expose all the business logic and data required by the application? Does it have SSO capability across applications?

The CNCF (Cloud Native Computing Foundation) has over 400 cloud provider, user, and supporter members, and its working groups and CloudEvents specification engage these and thousands more in the ongoing mission to make cloud native computing ubiquitous and to allow engineers to make high-impact changes frequently and predictably with minimal toil.

We predict this trend will continue well beyond 2019 as CNCF drives adoption of this paradigm by fostering and sustaining an ecosystem of open source, vendor-neutral projects, and democratizing state-of-the-art patterns to make these innovations accessible for everyone.

Oracle is a platinum member of CNCF, along with 17 other major cloud providers. We are serious about our commitment to open source, open development practices, and sharing our expertise via technical tutorials, talks at meetups and conferences, and helping businesses succeed. Learn more and engage with us at cloudnative.oracle.com, and we’d love to hear if you agree with the predictions expressed in this post. 

Leading Pharmacy Extends 100 Year Legacy with Oracle

Oracle Press Releases - Wed, 2019-04-17 07:00
Press Release
Leading Pharmacy Extends 100 Year Legacy with Oracle
Farmatodo expands operations in new countries with modern retail technology

REDWOOD SHORES, Calif. and CARACAS, Venezuela—Apr 17, 2019

Farmatodo, a leading Venezuelan self-service chain of pharmacies, has specialized for more than 100 years in providing medicine, personal care, beauty and baby products that help consumers care for themselves and their families. Through a seamless shopping experience, the company offers approximately 8,000 products in more than 200 stores and online in Venezuela and Colombia. With Oracle Retail, Farmatodo has established a framework to expand into new countries and deploy new stores faster, and has gained the agility to serve in-store shoppers better with a modern point of service (POS) system.

In addition, this new technology will support Farmatodo’s aggressive delivery model in Colombia. While the area is known for challenging traffic congestion, the pharmacy offers home delivery within 30 minutes. To help fulfill this promise, the real-time inventory visibility and store consistency Oracle provides are critical.

“The continuity and expansion of our retail operation depended on reducing technological risks and improving information integrity and business processes. We replaced outdated legacy systems with Oracle to create a foundation for growth in Latin America,” said Angelo Cirillo, chief information officer, Farmatodo. “Usually these projects take three years for only one country. Leveraging Oracle’s best practices and integrated solutions, we fast-tracked the implementation of two countries in two years.”

The company relies on Oracle Retail Merchandising System, Oracle Retail Store Inventory Management and Oracle Retail Warehouse Management System to manage the business at a corporate level and Oracle Retail Xstore Point-of-Service to enhance the consumer experience on the store floors.

Farmatodo selected Oracle PartnerNetwork (OPN) Platinum level member Retail Consult to implement the latest versions of the solutions. A longtime collaborator, Retail Consult has a deep understanding of Oracle technology, retail processes, and customers. The company employed a multifunctional team with a strong customer-centric approach, along with the Oracle Retail Reference Model, to chart a path to success for Farmatodo.

“To support international expansion, we faced a technological and corporate challenge. The previous experience with Oracle Retail system allowed us to fully evaluate and emulate features and functionalities before extending them throughout new and existing operations,” said Francisco Gerardo Díaz Parra, project director, Farmatodo. “The stability and data security provided by Oracle, combined with the highly skilled implementation partner and defined project governance brought us the optimal mix to integrate processes and modernize systems.”

“For thirty-plus years, we have been working hand-in-hand with global retailers to help ensure successful implementations and outcomes. The power of this combined knowledge continues to be central in delivering unmatched industry best practices and guiding innovations that are enabled by our modern platform. Our goal is to help our customers keep pace with the changes in consumer behavior and to enable them with operational agility and a clear view into their operations so they can move at the same speed,” said Mike Webster, senior vice president and general manager, Oracle Retail.

Contact Info
Kris Reeves
About Retail Consult

Retail Consult is a highly specialized group that has a big focus on technology solutions for retail, offering clients global perspective and experience with operations in Europe, North, South and Central America. The most senior resources average 15 years of retail experience, and the multilingual team integrates retail-specific skills in strategy, technology architecture, business process, change management, support, and management.

About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility and refine the customer experience. For more information, visit our website www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kris Reeves

  • +1.925.787.6744

Westchester Community College Uses Oracle Cloud to Modernize Education Experience

Oracle Press Releases - Wed, 2019-04-17 07:00
Press Release
Westchester Community College Uses Oracle Cloud to Modernize Education Experience
Community college deploys Oracle Student Cloud to recruit and engage students across an expanding portfolio of learning programs

Redwood Shores, Calif.—Apr 17, 2019

Westchester Community College is implementing Oracle Student Cloud solutions to support its goal of providing accessible, high-quality and affordable education to its diverse community. The two-year public college is affiliated with the State University of New York, the nation’s largest comprehensive public university system.

To keep pace with fast-changing workforce requirements and student expectations, institutions such as Westchester Community College are evolving to improve student outcomes and operational efficiency. This change demands both a new model for teaching, learning and research, and better ways to recruit, engage and manage students throughout their lifelong learning experience.

“We are committed to student success, academic excellence, and workforce and economic development. To deliver on those promises we needed to leverage the best technology to modernize our operations and how we engage with our students,” said Dr. Belinda Miles, president of Westchester Community College, Valhalla, N.Y. “By expanding our Oracle footprint with Oracle Student Cloud we will be able to support a diverse array of academic programs and learning opportunities including continuing education, while delivering better experiences to our students.”

Oracle Student Cloud solutions, including Student Management and Recruiting, will integrate seamlessly with Westchester’s existing Oracle Campus student information system. With Oracle Student Management, the school will be able to better inform existing and prospective students about classes and services, and Oracle Student Recruiting will improve and simplify the student recruitment process. The college will also be using Oracle Student Engagement to better communicate with and engage current and prospective students.

“Oracle Student Cloud enables organizations such as Westchester to promote an increasingly diverse array of academic programs for successful life-long learning,” said Vivian Wong, GVP higher education development, Oracle. “We are delighted to partner with Westchester on their cloud transformation journey.”

Supporting the entire student life cycle, Oracle Student Cloud is a complete suite of higher education cloud solutions, including Student Management, Student Recruiting, Student Engagement, and Student Financial Planning. Because the modules are designed to work together as a suite, institutions can choose their own incremental path to the cloud.

Contact Info
Katie Barron
Kristin Reeves
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.


Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Kristin Reeves

  • +1.925.787.6744

VirtualBox 6.0.6

Tim Hall - Wed, 2019-04-17 02:33

VirtualBox 6.0.6 was released last night.

The downloads and changelog are in the usual places.

So far I’ve only had a chance to install it on my Windows 10 laptop at home and my Windows 10 laptop at work. No dramas on either. I’ll probably do the installations on macOS Mojave and Oracle Linux 7 hosts tonight. I’ll add an update here to say how they’ve gone.

With all the other Oracle updates that have just come out, I’ll be doing loads of Vagrant and Docker builds over the next couple of evenings, so this should get a reasonable workout.



Update: The installations on macOS Mojave and Oracle Linux 7 hosts worked fine too.

VirtualBox 6.0.6 was first posted on April 17, 2019 at 8:33 am.

Podcast: On the Highway to Helidon

OTN TechBlog - Tue, 2019-04-16 23:00

Are you familiar with Project Helidon? It’s an open source Java microservices framework introduced by Oracle in September of 2018.  As Helidon project lead Dmitry Kornilov explains in his article Helidon Takes Flight, "It’s possible to build microservices using Java EE, but it’s better to have a framework designed from the ground up for building microservices."

Helidon consists of a lightweight set of libraries that require no application server and can be used in Java SE applications. While these libraries can be used separately, using them in combination provides developers with a solid foundation on which to build microservices.

In this program we’ll dig into Project Helidon with a panel that consists of two people who are actively engaged in the project, and two community leaders who have used Helidon in development projects, and have also organized Helidon-focused Meet-Ups.

This program was recorded on Friday, March 8, 2019. So let’s journey through time and space and get to the conversation. Just press play in the widget.

The Panelists

Dmitry Kornilov
Senior Software Development Manager, Oracle; Project Lead, Project Helidon
Prague, Czech Republic


Tomas Langer

Tomas Langer
Consulting Member of Technical Staff, Oracle; Member of the Project Helidon Team
Prague, Czech Republic


Oracle ACE Associate José Rodrigues

José Rodrigues
Principal Consultant and Business Analyst, Link Consulting; Co-Organizer, Oracle Developer Meetup Lisbon
Lisbon, Portugal


Oracle ACE Phil Wilkins

Phil Wilkins
Senior Consultant, Capgemini; Co-Organizer, Oracle Developer Meetup London
Reading, UK


Relevant Resources

Help with v$statname and v$sysstat

Tom Kyte - Tue, 2019-04-16 19:06
Tom, Can you please provide info on how can I find the full table scan and index table scan activities in the database using v$statname and v$sysstat? Do I need to set TIMED_STATISTICS=TRUE before running queries against v$sysstat?...
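As a starting point, here is a sketch of the kind of query the question describes, joining v$statname to v$sysstat for the scan-related counters. Statistic names vary by Oracle version, so treat the IN list as an example and adjust it for your release:

```sql
-- Instance-wide scan counters accumulated since startup.
-- Statistic names may differ by version; query v$statname to see what exists.
SELECT n.name, s.value
FROM   v$statname n
       JOIN v$sysstat s ON s.statistic# = n.statistic#
WHERE  n.name IN ('table scans (short tables)',
                  'table scans (long tables)',
                  'index fast full scans (full)');
```

Note that these are event counters, which Oracle maintains regardless of TIMED_STATISTICS; that parameter governs timed statistics such as CPU usage, so it should not be required for this query.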
Categories: DBA Blogs

18c Upgrade: Failed gridSetup.sh -executeConfigTools: Cluster upgrade state is [UPGRADE FINAL]

Michael Dinh - Tue, 2019-04-16 16:53

Check 18c Upgrade Results – The cluster upgrade state is [UPGRADE FINAL]

18c Upgrade Getting to Results – The cluster upgrade state is [NORMAL]

This is a multi-part series on the 18c upgrade; I suggest reading the above two posts first.

Commands for gridSetup.sh

+ /u01/ -silent -skipPrereqs -applyRU /media/patch/Jan2019/28828717 -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false
Preparing the home to patch...
Applying the patch /media/patch/Jan2019/28828717...
Successfully applied the patch.
The log can be found at: /u01/app/oraInventory/logs/GridSetupActions2019-04-16_06-19-12AM/installerPatchActions_2019-04-16_06-19-12AM.log
Launching Oracle Grid Infrastructure Setup Wizard...

The response file for this session can be found at:

You can find the log of this install session at:

As a root user, execute the following script(s):
        1. /u01/

Execute /u01/ on the following nodes:
[racnode-dc1-1, racnode-dc1-2]

Run the script on the local node first. After successful completion, you can start the script in parallel on all other nodes, except a node you designate as the last node. When all the nodes except the last node are done successfully, run the script on the last node.

Successfully Setup Software.
As install user, execute the following command to complete the configuration.
        /u01/ -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp [-silent]

+ exit

Basically, the error provided is utterly useless.

$ /u01/ -executeConfigTools -responseFile /sf_OracleSoftware/18cLinux/gridsetup_upgrade.rsp -silent
Launching Oracle Grid Infrastructure Setup Wizard...

You can find the logs of this session at:

[WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.

Check logs from directory /u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

$ cd /u01/app/oraInventory/logs/GridSetupActions2019-04-16_12-59-56PM

$ ls -alrt
total 1072
-rw-r----- 1 oracle oinstall     130 Apr 16 12:59 installerPatchActions_2019-04-16_12-59-56PM.log
-rw-r----- 1 oracle oinstall       0 Apr 16 12:59 gridSetupActions2019-04-16_12-59-56PM.err
drwxrwx--- 8 oracle oinstall    4096 Apr 16 13:01 ..
-rw-r----- 1 oracle oinstall 1004378 Apr 16 13:01 gridSetupActions2019-04-16_12-59-56PM.out
-rw-r----- 1 oracle oinstall    2172 Apr 16 13:01 time2019-04-16_12-59-56PM.log ***
-rw-r----- 1 oracle oinstall   73047 Apr 16 13:01 gridSetupActions2019-04-16_12-59-56PM.log ***
drwxrwx--- 2 oracle oinstall    4096 Apr 16 13:01 .

Check time2019-04-16_12-59-56PM.log

$ cat time2019-04-16_12-59-56PM.log
 # Message # ElapsedTime # Current Time ( ms )
 # Starting step:INITIALIZE_ACTION of state:init #  0  # 1555412405106
 # Finished step:INITIALIZE_ACTION of state:init # 1 # 1555412405106
 # Starting step:EXECUTE of state:init #  0  # 1555412405108
 # Finished step:EXECUTE of state:init # 3 # 1555412405111
 # Starting step:VALIDATE of state:init #  0  # 1555412405113
 # Finished step:VALIDATE of state:init # 2 # 1555412405115
 # Starting step:TRANSITION of state:init #  0  # 1555412405115
 # Finished step:TRANSITION of state:init # 2 # 1555412405117
 # Starting step:EXECUTE of state:CRSConfigTools #  0  # 1555412405117
 # Finished step:EXECUTE of state:CRSConfigTools # 813 # 1555412405930
 # Starting step:VALIDATE of state:CRSConfigTools #  0  # 1555412405930
 # Finished step:VALIDATE of state:CRSConfigTools # 0 # 1555412405930
 # Starting step:TRANSITION of state:CRSConfigTools #  0  # 1555412405930
 # Finished step:TRANSITION of state:CRSConfigTools # 26591 # 1555412432521
 # Starting step:INITIALIZE_ACTION of state:setup #  0  # 1555412432521
 # Finished step:INITIALIZE_ACTION of state:setup # 0 # 1555412432521
 # Starting step:EXECUTE of state:setup #  0  # 1555412432522
 # Finished step:EXECUTE of state:setup # 6 # 1555412432528
 # Configuration in progress. #  0  # 1555412436788
 # Update Inventory in progress. #  0  # 1555412437768
 # Update Inventory successful. # 52612 # 1555412490380
 # Upgrading RHP Repository in progress. #  0  # 1555412490445

 # Upgrading RHP Repository failed. # 12668 # 1555412503112

 # Starting step:VALIDATE of state:setup #  0  # 1555412503215
 # Finished step:VALIDATE of state:setup # 15 # 1555412503230
 # Starting step:TRANSITION of state:setup #  0  # 1555412503230
 # Finished step:TRANSITION of state:setup # 0 # 1555412503230
 # Starting step:EXECUTE of state:finish #  0  # 1555412503230
 # Finished step:EXECUTE of state:finish # 6 # 1555412503236
 # Starting step:VALIDATE of state:finish #  0  # 1555412503237
 # Finished step:VALIDATE of state:finish # 1 # 1555412503238
 # Starting step:TRANSITION of state:finish #  0  # 1555412503238
 # Finished step:TRANSITION of state:finish # 0 # 1555412503238


Check gridSetupActions2019-04-16_12-59-56PM.log

$ grep -B2 -A100 'Executing RHPUPGRADE' gridSetupActions2019-04-16_12-59-56PM.log
INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
INFO:  [Apr 16, 2019 1:01:30 PM] Executing RHPUPGRADE
INFO:  [Apr 16, 2019 1:01:30 PM] Command /u01/ upgradeSchema -fromversion
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn.handleProcess() entered.
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: getting configAssistantParmas.
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: checking secretArguments.
INFO:  [Apr 16, 2019 1:01:30 PM] No arguments to pass to stdin
INFO:  [Apr 16, 2019 1:01:30 PM] ... GenericInternalPlugIn: starting read loop.
INFO:  [Apr 16, 2019 1:01:43 PM] Completed Plugin named: rhpupgrade
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.saveSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.executeSelectedToolsInAggregate action performed
INFO:  [Apr 16, 2019 1:01:43 PM] Exiting ConfigClient.executeSelectedToolsInAggregate method
INFO:  [Apr 16, 2019 1:01:43 PM] Adding ExitStatus SUCCESS_MINUS_RECTOOL to the exit status set
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.saveSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Calling event ConfigSessionEnding
INFO:  [Apr 16, 2019 1:01:43 PM] ConfigClient.endSession method called
INFO:  [Apr 16, 2019 1:01:43 PM] Completed Configuration
INFO:  [Apr 16, 2019 1:01:43 PM] Adding ExitStatus FAILURE to the exit status set
INFO:  [Apr 16, 2019 1:01:43 PM] All forked task are completed at state setup
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Validating state <setup>

WARNING:  [Apr 16, 2019 1:01:43 PM] [WARNING] [INS-43080] Some of the configuration assistants failed, were cancelled or skipped.
   ACTION: Refer to the logs or contact Oracle Support Services.

INFO:  [Apr 16, 2019 1:01:43 PM] Advice is CONTINUE
INFO:  [Apr 16, 2019 1:01:43 PM] Completed validating state <setup>
INFO:  [Apr 16, 2019 1:01:43 PM] Verifying route success
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Executing action at state finish
INFO:  [Apr 16, 2019 1:01:43 PM] FinishAction Actions.execute called
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application
INFO:  [Apr 16, 2019 1:01:43 PM] Completed executing action at state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Moved to state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Waiting for completion of background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Completed background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Validating state <finish>
WARNING:  [Apr 16, 2019 1:01:43 PM] Validation disabled for the state finish
INFO:  [Apr 16, 2019 1:01:43 PM] Completed validating state <finish>
INFO:  [Apr 16, 2019 1:01:43 PM] Terminating all background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Terminated all background operations
INFO:  [Apr 16, 2019 1:01:43 PM] Successfully executed the flow in SILENT mode
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application
INFO:  [Apr 16, 2019 1:01:43 PM] inventory location is/u01/app/oraInventory
INFO:  [Apr 16, 2019 1:01:43 PM] Finding the most appropriate exit status for the current application

INFO:  [Apr 16, 2019 1:01:43 PM] Exit Status is -1
INFO:  [Apr 16, 2019 1:01:43 PM] Shutdown Oracle Grid Infrastructure 18c Installer
INFO:  [Apr 16, 2019 1:01:43 PM] Unloading Setup Driver

The Exit Status of -1 is probably why the cluster upgrade state is [UPGRADE FINAL].

Why upgrade the RHP Repository when oracle_install_crs_ConfigureRHPS=false?

$ grep -i rhp *
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:04 PM] Setting value for the property:oracle_install_crs_ConfigureRHPS in the bean:CRSInstallSettings
gridSetupActions2019-04-16_12-59-56PM.log: oracle_install_crs_ConfigureRHPS                       false
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:37 PM] Created config job for rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:00:37 PM] Selecting job named 'Upgrading RHP Repository' for retry
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Started Plugin named: rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Starting 'Upgrading RHP Repository'
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Executing RHPUPGRADE
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:30 PM] Command /u01/ upgradeSchema -fromversion
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Completed Plugin named: rhpupgrade
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
gridSetupActions2019-04-16_12-59-56PM.log:INFO:  [Apr 16, 2019 1:01:43 PM] Upgrading RHP Repository failed.
time2019-04-16_12-59-56PM.log: # Upgrading RHP Repository in progress. #  0  # 1555412490445
time2019-04-16_12-59-56PM.log: # Upgrading RHP Repository failed. # 12668 # 1555412503112

gridsetup_upgrade.rsp is used for the upgrade; the pertinent info is shown below.

## To upgrade clusterware and/or Automatic storage management of earlier     ##
## releases                                                                  ##
##  - Fill out sections A,B,C,D and H                                        ##

# Specify the required cluster configuration

# Configure RHPS - Rapid Home Provisioning Service
# Applicable only for DOMAIN cluster configuration
# Specify 'true' if you want to configure RHP service, else specify 'false'

$ sdiff -iEZbWBst -w 150 gridsetup.rsp gridsetup_upgrade.rsp
INVENTORY_LOCATION=                                                       |  INVENTORY_LOCATION=/u01/app/oraInventory
oracle.install.option=                                                    |  oracle.install.option=UPGRADE
ORACLE_BASE=                                                              |  ORACLE_BASE=/u01/app/oracle
oracle.install.crs.config.scanType=                                       |  oracle.install.crs.config.scanType=LOCAL_SCAN
oracle.install.crs.config.ClusterConfiguration=                           |  oracle.install.crs.config.ClusterConfiguration=STANDALONE
oracle.install.crs.config.configureAsExtendedCluster=                     |  oracle.install.crs.config.configureAsExtendedCluster=false
oracle.install.crs.config.gpnp.configureGNS=                              |  oracle.install.crs.config.gpnp.configureGNS=false
oracle.install.crs.config.autoConfigureClusterNodeVIP=                    |  oracle.install.crs.config.autoConfigureClusterNodeVIP=false
oracle.install.asm.configureGIMRDataDG=                                   |  oracle.install.asm.configureGIMRDataDG=false
oracle.install.asm.configureAFD=                                          |  oracle.install.asm.configureAFD=false
oracle.install.crs.configureRHPS=                                         |  oracle.install.crs.configureRHPS=false
oracle.install.crs.config.ignoreDownNodes=                                |  oracle.install.crs.config.ignoreDownNodes=false
oracle.install.config.managementOption=                                   |  oracle.install.config.managementOption=NONE
oracle.install.crs.rootconfig.executeRootScript=                          |  oracle.install.crs.rootconfig.executeRootScript=false

ora.cvu does not report any errors.

$ crsctl stat res -w "TYPE = ora.cvu.type" -t
Name           Target  State        Server                   State details
Cluster Resources
      1        ONLINE  ONLINE       racnode-dc1-2            STABLE
$ crsctl stat res -w "TYPE = ora.cvu.type" -p|grep RESULTS | sed 's/,/\n/g'

Run rhprepos upgradeSchema -fromversion – FAILED.

$ /u01/ upgradeSchema -fromversion
PRCT-1474 : failed to run 'mgmtca' on node racnode-dc1-2.

$ ps -ef|grep pmon
oracle    9722  4804  0 19:37 pts/0    00:00:00 grep --color=auto pmon
oracle   10380     1  0 13:46 ?        00:00:01 asm_pmon_+ASM1
oracle   10974     1  0 13:46 ?        00:00:01 apx_pmon_+APX1
oracle   11218     1  0 13:47 ?        00:00:02 ora_pmon_hawk1
$ ssh racnode-dc1-2
Last login: Tue Apr 16 18:44:30 2019

Welcome to racnode-dc1-2
OracleLinux 7.3 x86_64

FQDN: racnode-dc1-2.internal.lab

Processor: Intel(R) Core(TM) i7-2640M CPU @ 2.80GHz
#CPU's:    2
Memory:    5709 MB
Kernel:    4.1.12-61.1.18.el7uek.x86_64


$ ps -ef|grep pmon
oracle    9219     1  0 13:44 ?        00:00:01 asm_pmon_+ASM2
oracle   10113     1  0 13:45 ?        00:00:01 apx_pmon_+APX2
oracle   10619     1  0 13:45 ?        00:00:01 ora_pmon_hawk2
oracle   13200 13178  0 19:37 pts/0    00:00:00 grep --color=auto pmon

In conclusion, the silent upgrade process is poorly documented at best.

I am starting to wonder if the following parameters contributed to the issue:

-J-Doracle.install.mgmtDB=false -J-Doracle.install.mgmtDB.CDB=false -J Doracle.install.crs.enableRemoteGIMR=false

Check your hints carefully

Bobby Durrett's DBA Blog - Tue, 2019-04-16 16:32

Back in 2017 I wrote about how I had to disable the result cache after a database upgrade. This week I found one of our top queries and it looked like removing the result cache hints made it run 10 times faster. But this did not make sense, because I had disabled the result cache. Then I examined the hints closer. They looked like this:

/*+ RESULT CACHE */
There should be an underscore between the two words. I looked up hints in the manuals and found that CACHE is a real hint. So, I tried the query with these three additional combinations:

/*+ RESULT_CACHE */
/*+ RESULT */
/*+ CACHE */

It ran slow with the original hint and with just the CACHE hint but none of the others. So, the moral of the story is to check your hints carefully because they may not be what you think they are.
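One quick way to check a hint like this is to query V$SQL_HINT, which lists every hint name the optimizer recognizes (a sketch; the view is available in recent Oracle releases):

```sql
-- V$SQL_HINT lists all hints known to the optimizer.
-- A name missing from this view is not a valid hint, and the optimizer
-- silently ignores unrecognized tokens inside a hint comment.
SELECT name, version
FROM   v$sql_hint
WHERE  name IN ('CACHE', 'RESULT_CACHE', 'RESULT');
```

If RESULT is absent from the view while CACHE is present, that explains the behavior above: /*+ RESULT CACHE */ effectively degenerates to just the CACHE hint.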


Categories: DBA Blogs

Spring Cleaning: 5 Ways to Digital Declutter and Stay Proactive

Chris Warticki - Tue, 2019-04-16 15:13
Your Digital Spring Cleaning Tips

It’s that time of year—spring cleaning is here. Time to dust off your IT strategy and make sure your security, software, systems, and support are ready for the season ahead. Follow these easy, actionable tips to ensure your business is prepared, cyber safe, protected, and compliant.


Tip 1: Commit to Continuous Innovation

Now is the time to ensure you are maximizing the value of your Oracle investment and are ready for the future. If you haven’t already done so, explore Oracle Applications Unlimited and Oracle Premier Support offerings for your covered on-premises applications, including Oracle E-Business Suite, JD Edwards EnterpriseOne, PeopleSoft, and Siebel, and take advantage of the Lifetime Support Policy for your Applications Unlimited products and the latest product roadmaps.

Tip 2: Update Your Software

Outdated software is as problematic as having no security defense at all. Ensure your Oracle software is up to date to reduce the risk of cyber threats. Visit My Oracle Support to get the latest updates and details.

Looking for streamlined and more efficient upgrades with shorter cycles? Discover how you can receive continuous innovation releases for your covered Oracle Applications[1] with Oracle Applications Unlimited.

Tip 3: Patch, Patch, Patch

Security patching is essential for securing enterprise software and must be a core part of your security strategy. Failure to patch your software at the source leaves your software open to attack and your business open to risk.

According to the U.S. Department of Homeland Security, “it is necessary for all organizations to establish a strong ongoing patch management process to ensure the proper preventive measures are taken against potential threats.”

Learn more about the importance of cybersecurity (PDF) and how Oracle Support can help protect your business from cyber threats.

Tip 4: Do a Compliance Check

Many new regulations are being enacted in areas such as tax, legal, and data privacy, including the General Data Protection Regulation (GDPR). Learn how Oracle can help you on the road to compliance.

Tip 5: Polish Up Your Skills and Expertise

Take Oracle Support Accreditation learning paths to get support best practices and tips directly from Oracle product experts. By completing the accreditation learning series, you can increase your proficiency with My Oracle Support’s core functions and build skills to help you leverage Oracle solutions, tools, and knowledge that enable productivity. Get more insights into the benefits of getting accredited.


Whether it’s spring or any other season, we encourage customers to learn more about the fundamentals of protecting their businesses with a trusted partner who both understands the importance of security and delivers ongoing and unparalleled product innovation. Visit our Oracle Premier Support website to learn more.

[1] Covered Oracle Applications include PeopleSoft, Oracle E-Business Suite, JD Edwards EnterpriseOne, and Siebel, excluding specified individual products that Oracle will not extend support for beyond the already committed dates.

Latest Blog Posts from Oracle ACEs: April 7-13

OTN TechBlog - Tue, 2019-04-16 14:00

Busy as bees, these ACEs have been, keeping the buzz going with another week's worth of posts offering the kind of technical experience and expertise that can help to keep you from getting stung on your next project.

Oracle ACE Director Franck Panchot
Data Engineer, CERN
Lausanne, Switzerland




Oracle ACE Jhonata Lamim
Senior Oracle Consultant, Exímio Soluções em TI
Santa Catarina, Brazil



Oracle ACE Marco Mischke
Team Lead, Database Projects, Robotron Datenbank-Software GmbH
Dresden, Germany



Oracle ACE Noriyoshi Shinoda
Database Consultant, Hewlett Packard Enterprise
Tokyo, Japan



Oracle ACE Paul Guerin
Database Service Delivery Leader, Hewlett-Packard


Oracle ACE Ricardo Giampaoli
EPM Architect Consultant, The Hackett Group
Leinster, Ireland



Oracle ACE Rodrigo de Souza
Solutions Architect, Innive Inc
Rio Grande do Sul, Brazil



Oracle ACE Sean Stuber
Database Analyst, American Electric Power
Columbus, Ohio



Oracle ACE Stefan Koehler
Independent Oracle Performance Consultant and Researcher
Bavaria, Germany




Oracle ACE Yong Jing
System Architect Manager, Changde Municipal Human Resources and Social Security Bureau
Changde City, China



Oracle ACE Associate Emad Al-Mousa
Senior IT Consultant, Saudi Aramco
Saudi Arabia



Oracle ACE Eugene Fedorenko
Senior Architect, Flexagon
De Pere, Wisconsin



Related Resources

Oracle ACE Guide

Oracle ACE Director: Top-tier community members who engage more closely with Oracle.
Oracle ACE: Established Oracle advocates who are well known in the community.
Oracle ACE Associate: Entry point for the Oracle ACE program.

