Feed aggregator

Wireguard: An easy way to build VPNs

Dietrich Schroff - Sat, 2019-04-27 03:35
Last week I came across the following tool: WireGuard.

If you want to build a VPN, you can choose one of the following strategies:
  • based on IPSec
  • using TLS
(These are the two main options to choose from - of course there are some others...)

The nice thing about WireGuard (from the Linux point of view) is that WireGuard interfaces are handled like all other network interfaces on your device.

If you are really interested, you should read the whitepaper. Here are some excerpts:

... IPSec ... updating these data structures based on the results of a key exchange, generally done with IKEv2 [13], itself a complicated protocol with much choice and malleability. The complexity, as well as the sheer amount of code, of this solution is considerable. Administrators have a completely separate set of firewalling semantics and secure labeling for IPsec packets. ...

... based solution that uses TLS. By virtue of it being in user space, it has very poor performance—since packets must be copied multiple times between kernel space and user space—and a long-lived daemon is required; OpenVPN appears far from stateless to an administrator. ...

A WireGuard interface, wg0, can be added and configured to have a tunnel IP address of 10.192.122.3 in a /24 subnet with the standard ip(8) utilities ...

One design goal of WireGuard is to avoid storing any state prior to authentication and to not send any responses to unauthenticated packets. With no state stored for unauthenticated packets, and with no response generated, WireGuard is invisible to illegitimate peers and network scanners. Several classes of attacks are avoided by not allowing unauthenticated packets to influence any state. And more generally, it is possible to implement WireGuard in a way that requires no dynamic memory allocation at all, even for authenticated packets, as explained in section 7.

So the next step is to install this VPN solution and see if the administration is really as easy as promised...
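
To get a feeling for how that looks in practice, here is a minimal sketch using the standard ip(8) and wg(8) utilities (the address is the one from the whitepaper excerpt; the key file path is just a placeholder):

# create a WireGuard interface and give it a tunnel address
ip link add dev wg0 type wireguard
ip address add 10.192.122.3/24 dev wg0

# load the private key (placeholder path) and bring the interface up
wg set wg0 private-key /etc/wireguard/wg0.key
ip link set up dev wg0

# show the resulting WireGuard configuration
wg show wg0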

Security Alert CVE-2019-2725 Released

Oracle Security Team - Fri, 2019-04-26 12:43

Oracle has just released Security Alert CVE-2019-2725.  This Security Alert was released in response to a recently-disclosed vulnerability affecting Oracle WebLogic Server.  This vulnerability affects a number of versions of Oracle WebLogic Server and has received a CVSS Base Score of 9.8.  WebLogic Server customers should refer to the Security Alert Advisory for information on affected versions and how to obtain the required patches. 


Please note that vulnerability CVE-2019-2725 has been associated in press reports with vulnerabilities CVE-2018-2628, CVE-2018-2893, and CVE-2017-10271.  These vulnerabilities were addressed in patches released in previous Critical Patch Update releases.


Due to the severity of this vulnerability, Oracle recommends that this Security Alert be applied as soon as possible.


For more information:

The Security Alert advisory is located at  https://www.oracle.com/technetwork/security-advisory/alert-cve-2019-2725-5466295.html 

The October 2017 Critical Patch Update advisory is located at https://www.oracle.com/technetwork/topics/security/cpuoct2017-3236626.html

The April 2018 Critical Patch Update advisory is located at https://www.oracle.com/technetwork/security-advisory/cpuapr2018-3678067.html

The July 2018 Critical Patch Update advisory is located at https://www.oracle.com/technetwork/security-advisory/cpujul2018-4258247.html

Patching for Java Web Start in EBS R12.1

Online Apps DBA - Fri, 2019-04-26 04:48

Are you facing issues in Patching for Java Web Start in EBS R12.1? Then, this blog post at www.k21academy.com/appsdba50 is what you should read! The blog post tells you: ✔ Why use Java Web Start? ✔ How to Configure/patch Java Web Start in EBS(R12.1) for opening forms? ✔ How to start Java Web Start by […]

The post Patching for Java Web Start in EBS R12.1 appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

RMAN - Full vs. Incremental - performance problem

Tom Kyte - Thu, 2019-04-25 22:46
Hi. I am testing RMAN. I have run an incremental 0 and an incremental 1 cumulative test without having any database activity in between these tests. I am using OmniBack media manager with RMAN. I have three databases. Here are the timing results ...
Categories: DBA Blogs

UTA Components and License Restrictions

Anthony Shorten - Thu, 2019-04-25 20:17

The Oracle Utilities Testing Accelerator is fast becoming a part of a lot of cloud and on-premise implementations as partners and customers recognize the value of pre-built assets in automated testing to reduce costs and risk. This growth necessitates some clarification regarding the licensing of the Oracle Utilities Testing Accelerator to ensure compliance with the license.

  • Oracle Utilities product exclusive. The focus of the Oracle Utilities Testing Accelerator is to provide an optimized solution for testing Oracle Utilities products. The Oracle Utilities Testing Accelerator is licensed for exclusive use with the Oracle Utilities products it is certified against. It will not work with products that are not certified, as there is no content or capability built into the solution for products outside that realm.
  • Named User Plus License. The Oracle Utilities Testing Accelerator uses the Named User Plus license metric. Refer to the License Definitions and Rules for a definition of the restrictions of that license. The license cannot be shared across physical users. The license gives each licensed user access to any relevant content available for any number of any supported non-production copies of certified Oracle Utilities products (including multiple certified products and multiple certified versions).
  • Non-Production Use. The Oracle Utilities Testing Accelerator is licensed for use against non-Production copies of the certified products. It cannot be used against a Production environment.
  • All components of the Oracle Utilities Testing Accelerator are covered by the license. The Oracle Utilities Testing Accelerator is provided as a number of components, including the browser-based Oracle Utilities Testing Workbench (including the execution engine), the Oracle Utilities Testing Repository (storing assets and results of tests), the Oracle Utilities Test Asset libraries provided by Oracle, the Oracle Utilities Testing Accelerator Eclipse Plug-in and the Oracle Utilities Testing Accelerator Testing API implemented on the target copy of the Oracle Utilities product you are testing. These are all subject to the conditions of the license. For example, you cannot use the Oracle Utilities Testing Accelerator Testing API without the use of the Oracle Utilities Testing Accelerator. Therefore, you cannot install the Testing API on a Production environment or even use the API in any respect other than with the Oracle Utilities Testing Accelerator (and selected other Oracle testing products).

Oracle Utilities Testing Accelerator continues to be a cost-effective way of reducing testing costs associated with Oracle Utilities products on-premise and on the Oracle Cloud.

Moving from Change Handlers to Algorithms

Anthony Shorten - Thu, 2019-04-25 18:08

One of the most common questions I receive from partners is why Java-based Change Handlers are not supported on the Oracle Utilities SaaS Cloud. Change Handlers are not supported for a number of key reasons:

  • Java is not Supported on Oracle Utilities SaaS Cloud. As pointed out previously, Java based extensions are not supported on the Oracle Utilities SaaS Cloud to reduce costs associated with deployment activities on the service and to restrict access to raw devices and information at the service level. We replaced Java with enhancements to both scripting and the introduction of Groovy support.
  • Change Handlers are a legacy from the history of the product. Change Handlers were introduced in early versions of the products to compensate for limited algorithm entities in those early versions. Algorithm entities are points in the logic, or process, where customers/partners can manipulate data and processing for extensions using algorithms. In early versions, algorithm entities were limited by the common points of extension that were made available in those versions. Over time, based upon feedback from customers and partners, the Oracle Utilities products introduced a wider range of algorithm entities that can be exploited for extensions. In fact, in the latest release of C2M, there are over 370 algorithm entities available. Over the years, the need for Change Handlers has slowly been replaced by the provision of these new or improved algorithm entities, to the point where they are no longer as relevant as they once were.

On the Oracle Utilities SaaS Cloud, it is recommended to use the relevant algorithm entity with an appropriate algorithm, written in Groovy or ConfigTools based scripting rather than using Change Handlers. Customers using change handlers today are strongly encouraged to replace those change handlers with the appropriate algorithm.

Note: Customers and Partners not intending to use the Oracle Utilities SaaS Cloud can continue to use the Change Handler functionality but it is highly recommended to also consider moving to using the appropriate algorithms to reduce maintenance costs and risks.

Two New “Dive Into Containers and Cloud Native” Podcasts

OTN TechBlog - Thu, 2019-04-25 16:48

Oracle Cloud Native Services cover container orchestration and management, build pipelines, infrastructure as code, streaming and real-time telemetry. Join Kellsey Ruppel and me for two new podcasts about these services.

In the first podcast, you can learn more about three services for containers: Container Engine for Kubernetes, Cloud Infrastructure Registry, and Container Pipelines.

In the second podcast, you can learn more about Resource Manager for infrastructure as code, Streaming for event-based architectures, Monitoring for real-time telemetry, and Notifications for real-time alerts based on infrastructure changes.

You can find these and other podcasts at the Oracle Cloud Café. Please take a few minutes to listen in and share any feedback you may have.

Using Drop Zones to Isolate Customizations

PeopleSoft Technology Blog - Thu, 2019-04-25 13:33

The ability to customize PeopleSoft applications has always been a powerful and popular aspect of PeopleSoft products.  This power and flexibility came at a cost, however. Although customizations are valuable in that they enable customers to meet unique and important requirements that are not part of the standard delivered products, they are difficult and costly to maintain.

PeopleSoft has been working hard to enable customers to continue developing those valuable customizations, but implement them in a way that they are isolated from our delivered products.  This minimizes life cycle impact and allows customers to take new images without having to re-implement customizations each time.  Providing the ability to isolate customizations is a high priority investment for us.  We've developed several features that facilitate the ability to isolate customizations.  The latest is Drop Zones.  Drop Zones became available with PeopleTools 8.57, and customers must be on 8.57 to implement them. 

First let's look at the benefits as well as things you must consider when using Drop Zones:

Benefits
  • Customers can add custom fields and other page elements without life cycle impact
  • You have the full power of PeopleTools within Drop Zones.  You can apply PeopleCode to custom elements
  • Reduces LCM time when taking new image.  No need to re-implement customizations!
  • Works on Fluid pages
Considerations
  • It's still developer work, some of which is done in App Designer
  • Doesn't reduce implementation time for customization (the benefit is during LCM)
  • Some pages won't work with Drop Zones
  • No support for Classic pages (as of 8.57)
What Does PeopleSoft Deliver?

You can only use drop zones on pages delivered by PeopleSoft applications.  (Don't add your own--that would be a customization.)  Drop Zones will be delivered with the following application images:

  • FSCM 31
  • HCM 30
  • ELM 19
  • CRM 17

Our application teams are delivering drop zones in pages where customizations are most common.  This was determined in consultation with customers.  Typical pages have two drop zones: one at the top, the other at the bottom.  However, there may be cases with more or fewer drop zones.

How Do I Implement Drop Zones?

Move to PeopleTools 8.57 or later, and take application images that have drop zones. Then:

  1. Review and catalog your page customizations and determine whether they can be moved to Drop Zones.  Compare your list to delivered pages with Drop Zones.  (Lists for all applications are available on peoplesoftinfo.com>Key Concepts>Configuration.)

  2. Create subpages with customizations you want to implement (custom fields, labels, other widgets...)  In the example here, we've created a simple subpage with static text and a read-only Employee ID field.
  3. Insert your custom subpage into the Drop Zone.
  4. Configure your subpage to the component containing the page.  There may be more than one drop zone available, so make sure you choose the one you want.  Subpages are identified by labels on their group boxes.


    Your custom subpage will be dynamically inserted at runtime.  Any fields and other changes on your subpages are loaded into the component buffer along with delivered content.  Your subpages are displayed as if part of the main page definition.  End users will see no difference between custom content and delivered content.

Now this customization is isolated, and will not be affected when you take the next application image.  Your customization will be carried forward, and you don't have to re-implement it every time you take a new image.  These changes will not appear in a Compare Report.

What Can I Do to Prepare for Using Drop Zones?

Even if you are not yet on an application image that contains Drop Zones, you can prepare ahead of time, making implementation faster. 

  1. Review and catalog your page customizations and compare them against the pages with delivered drop zones in the images you will eventually uptake.  
  2. Consider which page customizations you want to implement with Drop Zones.  Prioritize them.
  3. Start building subpages containing those customizations.

When you move to the application images that contain drop zones, you can simply insert the subpages you've created as described above.

See this video for a nice example of isolating a customization with Drop Zones.


B2B Brands Turn to Oracle Customer Experience Cloud to Meet the Evolving Demands of Buyers

Oracle Press Releases - Thu, 2019-04-25 07:00
Blog
B2B Brands Turn to Oracle Customer Experience Cloud to Meet the Evolving Demands of Buyers

By Jeri Kelley, director of product strategy, Oracle Commerce Cloud—Apr 25, 2019

We are living in the experience economy, where a customer’s experience with a brand is inseparable from the value of the goods and services a business provides. You may be thinking that I am speaking about B2C buyers but the B2B landscape is no different. Buying a complex machine or service should be as convenient as receiving groceries ordered online.

As the expectations of B2B buyers continue to shift, what should businesses do to meet the evolving needs of today’s buyer?

Simply put—businesses need to adapt to the way people want to buy. It’s easier said than done, but it is achievable by connecting data, intelligence and experiences. And that’s exactly what we will be talking about at B2B Online this week in Chicago. Please come visit the Oracle Customer Experience (CX) Cloud team at booth #202 to say hello and learn more about Oracle Commerce Cloud, Oracle CPQ Cloud and Oracle Subscription Management Cloud.

Whether it’s launching a new site, expanding to a new channel, or exploring a new business model, businesses are choosing Oracle CX Cloud to meet the complex and ever-changing needs of B2B organizations. Brands like Motorola Solutions and Construction Specialties are at the forefront of this shift in purchasing behavior and have partnered with Oracle to streamline ecommerce buying experiences, drive unique digital touchpoints and increase business agility. 

Motorola Solutions is a global leader in mission-critical communications that make cities safer and help communities and businesses thrive. As Motorola Solutions looked to replace a legacy commerce platform with a scalable solution that allows for growth, it turned to Oracle Commerce Cloud and Oracle CPQ Cloud to enable self-service buying to both its B2C and B2B buyers.

“With Oracle Commerce Cloud, business teams can make the updates they need at a rapid pace and our development teams get to leverage the modern architecture of the unified platform for both B2C and B2B sites,” explains John Pirog, senior director of digital development, Motorola Solutions. “Also, it was key that our commerce platform integrated with applications like Oracle CPQ Cloud to allow our customers to quote and purchase configurable products online.”

Construction Specialties, a global manufacturer of specialty building products, looked to move away from highly customized, antiquated technologies that were hindering its ecommerce growth. One of the main goals was to make it easier for its buyers to conduct business with Construction Specialties.

“Now that we are in the cloud with Oracle, including Oracle ERP Cloud and Oracle Commerce Cloud, it is a lot easier to get what we need faster whether it’s an upgrade or an extension,” says Michael Weissberg, digital marketing manager, Construction Specialties. “One example surrounds mobile capabilities. We have seen a shift in how our customers want to buy and our old site was not mobile friendly. Since launching our responsive site on Oracle Commerce Cloud, we now see greater conversion rates on mobile devices that result in higher sales. In fact, we received our highest online sale immediately after our launch.”

Meeting customer expectations is never easy, but having the flexibility to change, adopt and experiment quickly gives businesses the ability to deliver the experiences B2B buyers want at scale. We look forward to seeing you at B2B Online and be sure to attend our session on how B2B companies can succeed in the Experience Economy. If you’re not attending, please visit Oracle CX for more information.


Presentation Persuasion: Calls for Proposals for Upcoming Events

OTN TechBlog - Thu, 2019-04-25 05:00

Sure you've got solid technical chops, and you share your knowledge through your blog, articles, and videos. But if you want to walk it like you talk it you have to get yourself in front of a live audience and keep them awake for about an hour. If you do it right, who knows? You might just spend the time after your session signing autographs and posing for selfies with your new fans. The first step in accomplishing all that is to respond to calls for proposals for conferences, meet-ups, and other live events like these:

  • AUSOUG Webinar Series 2019
    Ongoing series of webinars hosted by the Australian Oracle User Group. No CFP deadline posted.
     
  • NCOAUG Training Day 2019
    CFP Deadline: May 17, 2019
    North Central Oracle Applications User Group
    Location: Oakbrook Terrace, Ill
    Event: August 1, 2019
     
  • MakeIT Conference
    CFP Deadline: May 17, 2019

    Organized by the Slovenian Oracle User Group (SIOUG).
    Event: October 14-15, 2019
     
  • HrOUG 2019
    CFP Deadline: May 27, 2019
    Organized by the Croatian Oracle Users Group
    Event: October 15-18, 2019
     
  • DOAG 2019 Conference and Exhibition
    CFP Deadline: June 3, 2019
    Organized by the Deutsche Oracle Anwendergruppe (German Oracle Users Group)
    Location: Nürnberg, Germany
    Event: November 19-20, 2019

Good luck!


Creating PostgreSQL users with a PL/pgSQL function

Yann Neuhaus - Thu, 2019-04-25 04:07

Sometimes you might want to create users in PostgreSQL using a function. One use case for this is that you want to give other users the possibility to create users without granting them the right to do so. How is that possible then? Much the same as in Oracle, you can create functions in PostgreSQL that execute either with the permissions of the user who created the function or with the permissions of the user who executes the function. Let's see how that works.

Here is a little PL/pgSQL function that creates a user with a given password, does some checks on the input parameters and tests if the user already exists:

create or replace function f_create_user ( pv_username name
                                         , pv_password text
                                         ) returns boolean
as $$
declare
  lb_return boolean := true;
  ln_count integer;
begin
  if ( pv_username is null )
  then
     raise warning 'Username must not be null';
     lb_return := false;
  end if;
  if ( pv_password is null )
  then
     raise warning 'Password must not be null';
     lb_return := false;
  end if;
  -- test if the user already exists
  begin
      select count(*)
        into ln_count
        from pg_user
       where usename = pv_username;
  exception
      when no_data_found then
          -- ok, no user with this name is defined
          null;
      when too_many_rows then
          -- this should really never happen
          raise exception 'You have a huge issue in your catalog';
  end;
  if ( ln_count > 0 )
  then
     raise warning 'The user "%" already exist', pv_username;
     lb_return := false;
  else
      -- concatenate the value of pv_password, quoted as a string literal
      execute 'create user '||pv_username||' with password '||''''||pv_password||'''';
  end if;
  return lb_return;
end;
$$ language plpgsql;

Once that function is created:

postgres=# \df
                                   List of functions
 Schema |     Name      | Result data type |        Argument data types         | Type 
--------+---------------+------------------+------------------------------------+------
 public | f_create_user | boolean          | pv_username name, pv_password text | func
(1 row)

… users can be created by calling this function when connected as a user with permissions to do so:

postgres=# select current_user;
 current_user 
--------------
 postgres
(1 row)

postgres=# select f_create_user('test','test');
 f_create_user 
---------------
 t
(1 row)

postgres=# \du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 test      |                                                            | {}

Trying to execute this function with a user that does not have permissions to create other users will fail:

postgres=# create user a with password 'a';
CREATE ROLE
postgres=# grant EXECUTE on function f_create_user(name,text) to a;
GRANT
postgres=# \c postgres a
You are now connected to database "postgres" as user "a".
postgres=> select f_create_user('test2','test2');
ERROR:  permission denied to create role
CONTEXT:  SQL statement "create user test2 with password 'test2'"
PL/pgSQL function f_create_user(name,text) line 35 at EXECUTE

You can make that work by saying that the function should run with the permissions of the user who created the function:

create or replace function f_create_user ( pv_username name
                                         , pv_password text
                                         ) returns boolean
as $$
declare
  lb_return boolean := true;
  ln_count integer;
begin
...
end;
$$ language plpgsql security definer;

From now on our user “a” is allowed to create other users:

postgres=> select current_user;
 current_user 
--------------
 a
(1 row)

postgres=> select f_create_user('test2','test2');
 f_create_user 
---------------
 t
(1 row)

postgres=> \du
                                   List of roles
 Role name |                         Attributes                         | Member of 
-----------+------------------------------------------------------------+-----------
 a         |                                                            | {}
 postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS | {}
 test      |                                                            | {}
 test2     |                                                            | {}

Before implementing something like this, consider the "Writing SECURITY DEFINER Functions Safely" section in the documentation. There are some points to take care of, such as this:

postgres=# revoke all on function f_create_user(name,text) from public;
REVOKE

… and correctly setting the search_path.
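
A minimal sketch of what that could look like for this function (pg_catalog is sufficient here, because the function only references pg_user):

postgres=# alter function f_create_user(name,text) set search_path = pg_catalog, pg_temp;
ALTER FUNCTION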

The post Creating PostgreSQL users with a PL/pgSQL function appeared first on Blog dbi services.

Oracle Solaris Continuous Delivery

Chris Warticki - Wed, 2019-04-24 21:45
Protect Your Mission-Critical Environments and Resources

Oracle is committed to protecting your investment in Oracle Solaris and is mindful of your mission-critical environments and resources. Since January 2017, Oracle Solaris 11 has followed a Continuous Delivery model[1], delivering new functionality as updates to the existing release. This means you are not required to upgrade to gain access to new features and capabilities. Oracle provides support for Oracle Solaris 11 under Oracle Premier Support through at least 2031 and under Extended Support through 2034. These dates will be evaluated annually for updates.

At no additional cost, Oracle Solaris 11 customers under Oracle Premier Support can benefit from:

  • New features, bug fixes, and security vulnerability updates through both dot releases, and monthly Support Repository Updates (SRUs).
  • Binary Application Guarantee—guaranteed compatibility of your application from one Oracle Solaris release to the next, on premises or in the cloud.
  • 24/7 access via My Oracle Support to:
    • Oracle systems specialists
    • Online knowledge articles
    • Oracle Solaris proactive tools
    • Oracle Solaris communities 


In addition, the agile deployment of Oracle Solaris 11 allows customers to take advantage of more frequent, simpler, and less disruptive delivery of new features and capabilities. This helps with faster adoption and eliminates the need for slow and expensive re-qualifications. 

Learn more with these resources:

Policies

Oracle Solaris 11

Oracle Solaris 10

General

[1] Support dates are evaluated for update annually, and will be provided through at least the dates listed in the Oracle and Sun System Software: Lifetime Support Policy. See the Oracle Solaris Releases for details.

Solving DBFS UnMounting Issue

Michael Dinh - Wed, 2019-04-24 18:34

Often, I am quite baffled by Oracle's implementations and documentation.

RAC GoldenGate DBFS implementation has been a nightmare, and here is one example: DBFS Nightmare

I am about to show you another.

In general, I find any implementation using ACTION_SCRIPT is good in theory, bad in practice, but I digress.

Getting ready to shut down CRS for system patching, only to find out that CRS failed to shut down.

# crsctl stop crs
CRS-2675: Stop of 'dbfs_mount' on 'host02' failed
CRS-2675: Stop of 'dbfs_mount' on 'host02' failed
CRS-2673: Attempting to stop 'dbfs_mount' on 'host02'
CRS-2675: Stop of 'dbfs_mount' on 'host02' failed
CRS-2673: Attempting to stop 'dbfs_mount' on 'host02'
CRS-2675: Stop of 'dbfs_mount' on 'host02' failed
CRS-2799: Failed to shut down resource 'dbfs_mount' on 'host02'
CRS-2799: Failed to shut down resource 'ora.GG_PROD.dg' on 'host02'
CRS-2799: Failed to shut down resource 'ora.asm' on 'host02'
CRS-2799: Failed to shut down resource 'ora.dbfs.db' on 'host02'
CRS-2799: Failed to shut down resource 'ora.host02.ASM2.asm' on 'host02'
CRS-2794: Shutdown of Cluster Ready Services-managed resources on 'host02' has failed
CRS-2675: Stop of 'ora.crsd' on 'host02' failed
CRS-2799: Failed to shut down resource 'ora.crsd' on 'host02'
CRS-2795: Shutdown of Oracle High Availability Services-managed resources on 'host02' has failed
CRS-4687: Shutdown command has completed with errors.
CRS-4000: Command Stop failed, or completed with errors.

Checking /var/log/messages shows errors but no clue for a resolution.

# grep -i dbfs /var/log/messages
Apr 17 19:42:26 host02 DBFS_/ggdata: unmounting DBFS from /ggdata
Apr 17 19:42:26 host02 DBFS_/ggdata: umounting the filesystem using '/bin/fusermount -u /ggdata'
Apr 17 19:42:26 host02 DBFS_/ggdata: Stop - stopped, but still mounted, error

Apr 17 20:45:59 host02 DBFS_/ggdata: mount-dbfs.sh mounting DBFS at /ggdata from database DBFS
Apr 17 20:45:59 host02 DBFS_/ggdata: /ggdata already mounted, use mount-dbfs.sh stop before attempting to start

Apr 17 21:01:29 host02 DBFS_/ggdata: unmounting DBFS from /ggdata
Apr 17 21:01:29 host02 DBFS_/ggdata: umounting the filesystem using '/bin/fusermount -u /ggdata'
Apr 17 21:01:29 host02 DBFS_/ggdata: Stop - stopped, but still mounted, error
Apr 17 21:01:36 host02 dbfs_client[71957]: OCI_ERROR 3114 - ORA-03114: not connected to ORACLE
Apr 17 21:01:41 host02 dbfs_client[71957]: /FS1/dirdat/ih000247982 Block error RC:-5

Apr 17 21:03:06 host02 DBFS_/ggdata: unmounting DBFS from /ggdata
Apr 17 21:03:06 host02 DBFS_/ggdata: umounting the filesystem using '/bin/fusermount -u /ggdata'
Apr 17 21:03:06 host02 DBFS_/ggdata: Stop - stopped, now not mounted
Apr 17 21:09:19 host02 DBFS_/ggdata: filesystem /ggdata not currently mounted, no need to stop

Apr 17 22:06:16 host02 DBFS_/ggdata: mount-dbfs.sh mounting DBFS at /ggdata from database DBFS
Apr 17 22:06:17 host02 DBFS_/ggdata: ORACLE_SID is DBFS2
Apr 17 22:06:17 host02 DBFS_/ggdata: doing mount /ggdata using SID DBFS2 with wallet now
Apr 17 22:06:18 host02 DBFS_/ggdata: Start -- ONLINE

The messages in the script agent log show the error below (referenced in MOS documentation).
Do you know where the script agent log is located?

2019-04-17 20:56:02.793903 :    AGFW:3274315520: {1:53477:37077} Agent received the message: AGENT_HB[Engine] ID 12293:16017523
2019-04-17 20:56:19.124667 :CLSDYNAM:3276416768: [dbfs_mount]{1:53477:37077} [check] Executing action script: /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh[check]
2019-04-17 20:56:19.176927 :CLSDYNAM:3276416768: [dbfs_mount]{1:53477:37077} [check] Checking status now
2019-04-17 20:56:19.176973 :CLSDYNAM:3276416768: [dbfs_mount]{1:53477:37077} [check] Check -- ONLINE
2019-04-17 20:56:32.794287 :    AGFW:3274315520: {1:53477:37077} Agent received the message: AGENT_HB[Engine] ID 12293:16017529
2019-04-17 20:56:43.312534 :    AGFW:3274315520: {2:37893:29307} Agent received the message: RESOURCE_STOP[dbfs_mount host02 1] ID 4099:16017535
2019-04-17 20:56:43.312574 :    AGFW:3274315520: {2:37893:29307} Preparing STOP command for: dbfs_mount host02 1
2019-04-17 20:56:43.312584 :    AGFW:3274315520: {2:37893:29307} dbfs_mount host02 1 state changed from: ONLINE to: STOPPING
2019-04-17 20:56:43.313088 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [stop] Executing action script: /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh[stop]
2019-04-17 20:56:43.365201 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [stop] unmounting DBFS from /ggdata
2019-04-17 20:56:43.415516 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [stop] umounting the filesystem using '/bin/fusermount -u /ggdata'
2019-04-17 20:56:43.415541 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [stop] /bin/fusermount: failed to unmount /ggdata: Device or resource busy
2019-04-17 20:56:43.415552 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [stop] Stop - stopped, but still mounted, error
2019-04-17 20:56:43.415611 :    AGFW:3276416768: {2:37893:29307} Command: stop for resource: dbfs_mount host02 1 completed with status: FAIL
2019-04-17 20:56:43.415929 :CLSFRAME:3449863744:  TM [MultiThread] is changing desired thread # to 3. Current # is 2
2019-04-17 20:56:43.415970 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [check] Executing action script: /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh[check]
2019-04-17 20:56:43.416033 :    AGFW:3274315520: {2:37893:29307} Agent sending reply for: RESOURCE_STOP[dbfs_mount host02 1] ID 4099:16017535
2019-04-17 20:56:43.467939 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [check] Checking status now
2019-04-17 20:56:43.467964 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [check] Check -- ONLINE

The ACTION_SCRIPT can be found using crsctl as shown below, if you have not yet found where the script agent log is located.

oracle@host02 ~ $ $GRID_HOME/bin/crsctl stat res -w "TYPE = local_resource" -p | grep mount-dbfs.sh
ACTION_SCRIPT=/u02/app/12.1.0/grid/crs/script/mount-dbfs.sh
ACTION_SCRIPT=/u02/app/12.1.0/grid/crs/script/mount-dbfs.sh

Here’s a test case to resolve “failed to unmount /ggdata: Device or resource busy”.

First thought was to use fuser and kill the process.

# fuser -vmM /ggdata/
                     USER        PID ACCESS COMMAND
/ggdata:             root     kernel mount /ggdata
                     mdinh     85368 ..c.. bash
                     mdinh     86702 ..c.. vim
# 

My second thought: that might not be a good idea, and a better idea is to let the script handle this if it can.
Let's see what options are available for mount-dbfs.sh:

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh -h
Usage: mount-dbfs.sh { start | stop | check | status | restart | clean | abort | version }

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh version
20160215
oracle@host02 ~ 

Stop DBFS failed as expected.

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh status
Checking status now
Check -- ONLINE

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh stop
unmounting DBFS from /ggdata
umounting the filesystem using '/bin/fusermount -u /ggdata'
/bin/fusermount: failed to unmount /ggdata: Device or resource busy
Stop - stopped, but still mounted, error
oracle@host02 ~ $

Stop DBFS using the clean option. Notice the PID killed is 40047 and not the one reported by fuser (mdinh 86702 ..c.. vim).
Note: not all output is displayed, for brevity.

oracle@host02 ~ $ /bin/bash -x /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh clean
+ msg='cleaning up DBFS nicely using (fusermount -u|umount)'
+ '[' info = info ']'
+ /bin/echo cleaning up DBFS nicely using '(fusermount' '-u|umount)'
cleaning up DBFS nicely using (fusermount -u|umount)
+ /bin/logger -t DBFS_/ggdata -p user.info 'cleaning up DBFS nicely using (fusermount -u|umount)'
+ '[' 1 -eq 1 ']'
+ /bin/fusermount -u /ggdata
/bin/fusermount: failed to unmount /ggdata: Device or resource busy
+ /bin/sleep 1
+ FORCE_CLEANUP=0
+ '[' 0 -gt 1 ']'
+ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh status
+ '[' 0 -eq 0 ']'
+ FORCE_CLEANUP=1
+ '[' 1 -eq 1 ']'
+ logit error 'tried (fusermount -u|umount), still mounted, now cleaning with (fusermount -u -z|umount -f) and kill'
+ type=error
+ msg='tried (fusermount -u|umount), still mounted, now cleaning with (fusermount -u -z|umount -f) and kill'
+ '[' error = info ']'
+ '[' error = error ']'
+ /bin/echo tried '(fusermount' '-u|umount),' still mounted, now cleaning with '(fusermount' -u '-z|umount' '-f)' and kill
tried (fusermount -u|umount), still mounted, now cleaning with (fusermount -u -z|umount -f) and kill
+ /bin/logger -t DBFS_/ggdata -p user.error 'tried (fusermount -u|umount), still mounted, now cleaning with (fusermount -u -z|umount -f) and kill'
+ '[' 1 -eq 1 ']'
================================================================================
+ /bin/fusermount -u -z /ggdata
================================================================================
+ '[' 1 -eq 1 ']'
++ /bin/ps -ef
++ /bin/grep -w /ggdata
++ /bin/grep dbfs_client
++ /bin/grep -v grep
++ /bin/awk '{print $2}'
+ PIDS=40047
+ '[' -n 40047 ']'
================================================================================
+ /bin/kill -9 40047
================================================================================
++ /bin/ps -ef
++ /bin/grep -w /ggdata
++ /bin/grep mount.dbfs
++ /bin/grep -v grep
++ /bin/awk '{print $2}'
+ PIDS=
+ '[' -n '' ']'
+ exit 1

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh status
Checking status now
Check -- OFFLINE

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh status
Checking status now
Check -- OFFLINE

oracle@host02 ~ $ df -h|grep /ggdata
/dev/asm/acfs_vol-177                                  299G  2.8G  297G   1% /ggdata1
dbfs-@DBFS:/                                            60G  1.4G   59G   3% /ggdata

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh status
Checking status now
Check -- ONLINE
oracle@host02 ~ $ 

What PID was killed when running mount-dbfs.sh clean?
It belongs to dbfs_client.

mdinh@host02 ~ $ ps -ef|grep dbfs
oracle   34865     1  0 Mar29 ?        00:02:43 oracle+ASM1_asmb_dbfs1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   40047     1  0 Apr22 ?        00:00:10 /u01/app/oracle/product/12.1.0/db_1/bin/dbfs_client /@DBFS -o allow_other,direct_io,wallet /ggdata
oracle   40081     1  0 Apr22 ?        00:00:27 oracle+ASM1_user40069_dbfs1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
mdinh    88748 87565  0 13:30 pts/1    00:00:00 grep --color=auto dbfs
mdinh@host02 ~ $ 

It would have been so much better for mount-dbfs.sh to provide this info as part of the kill, versus having the user debug the script and trace the process.

If you have read this far, then it's only fair to provide the location of the script agent log.

$ grep mount-dbfs.sh $ORACLE_BASE/diag/crs/$(hostname -s)/crs/trace/crsd_scriptagent_oracle.trc | grep "2019-04-17 20:5"

Oracle Powers Full BIM Model Coordination for Design and Construction Teams

Oracle Press Releases - Wed, 2019-04-24 17:07
Press Release
Oracle Powers Full BIM Model Coordination for Design and Construction Teams Aconex cloud solution connects all project participants in model management process to eliminate complexity and speed project success

Future of Projects, Washington D.C.—Apr 24, 2019

Building information modeling (BIM) is an increasingly important component of construction project delivery, but is currently limited by a lack of collaboration, reliance on multiple applications, and missing integrations. The Oracle Aconex Model Coordination Cloud Service eliminates these challenges by enabling construction design and project professionals to collaboratively manage BIM models across the entire project team in a true common data environment (CDE). As such, organizations can reduce the risk of errors and accelerate project success by ensuring each team member has access to accurate, up-to-date models. 

The BIM methodology uses 3D, 4D and 5D modeling, in coordination with a number of tools and technologies, to provide digital representations of the physical and functional characteristics of places.

“Issues with model management means projects go over budget, run over schedule, and end up with a higher total cost of ownership for the client. As part of the early access program for Oracle Aconex Model Coordination, it was great to experience how Oracle has solved these challenges,” said Davide Gatti, digital manager, Multiplex.

Single Source of Truth for Project Data

With Oracle Aconex Model Coordination, organizations can eliminate the need for various point solutions in favor of project-wide BIM participation that drives productivity with faster processes and cycle times, enables a single source of truth for project information, and delivers a fully connected data set at handover for asset operation.

The Model Coordination solution enhances Oracle Aconex’s existing CDE capabilities, which are built around Open BIM standards (e.g., IFC 4 and BCF 2.1) and leverage a cloud-based, full model server to enable efficient, secure, and comprehensive model management at all stages of the project lifecycle.

The Oracle Aconex CDE, which is based on ISO 19650 and DIN SPEC 91391 definitions, provides industry-leading neutrality, security, and data interoperability. By enabling model management in this environment, Oracle Aconex unlocks new levels of visibility, coordination, and productivity across people and processes, including enabling comprehensive model-based issue and clash management.    

Key features of the new solution include: 

  • Seamless clash and design issue management and resolution
  • Dashboard overview and reporting
  • Creation of viewpoints – e.g. personal “bookmarks” within models and the linking of documents to objects
  • Integrated measurements
  • Process support and a full audit trail with the supply chain

“With Oracle Aconex Model Coordination, we’re making the whole model management process as seamless and easy as possible. By integrating authoring and validation applications to the cloud, users don’t need to upload and download their issues and clashes anymore,” said Frank Weiss, director of new products, BIM and innovation at Oracle Construction and Engineering.

“There’s so much noise and confusion around BIM and CDEs, much of it driven by misinformation in the market about what each term means. We believe everybody on a BIM project should work with the best available tool for their discipline. Therefore, open formats are critical for interoperability, and the use of a true CDE is key to efficient and effective model management.”

For more information on the Model Coordination solution, please visit https://www.oracle.com/industries/construction-engineering/aconex-products.html.

Contact Info
Judi Palmer
Oracle
+1.650.784.7901
judi.palmer@oracle.com
Brent Curry
H+K Strategies
+1.312.255.3086
brent.curry@hkstrategies.com
About Oracle Construction and Engineering

Asset owners and project leaders rely on Oracle Construction and Engineering solutions for the visibility and control, connected supply chain, and data security needed to drive performance and mitigate risk across their processes, projects, and organization. Our scalable cloud solutions enable digital transformation for teams that plan, build, and operate critical assets, improving efficiency, collaboration, and change control across the project lifecycle. www.oracle.com/construction-and-engineering.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle, please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1.650.784.7901

Brent Curry

  • +1.312.255.3086

Direct NFS, ODM 4.0 in 12.2: archiver stuck situation after a shutdown abort and restart

Yann Neuhaus - Wed, 2019-04-24 12:23

A customer had an interesting case recently: since Oracle 12.2 he got archiver stuck situations after a shutdown abort and restart. I reproduced the issue, and it is caused by Direct NFS when running ODM 4.0 (i.e. since 12.2). The issue also reproduces on 18.5. When Direct NFS is enabled, the archiver process writes to a file with a preceding dot in its name, e.g.:


.arch_1_90_985274359.arc

When the file has been fully copied from the online redo log, it is renamed so that it no longer contains the preceding dot, i.e. using the previous example:


arch_1_90_985274359.arc

When I do a "shutdown abort" while the archiver is in the process of writing to the archive file (with the leading dot in its name) and then restart the database, Oracle is not able to cope with that file, i.e. in the alert log I get the following errors:


2019-04-17T10:22:33.190330+02:00
ARC0 (PID:12598): Unable to create archive log file '/arch_backup/gen183/archivelog/arch_1_90_985274359.arc'
2019-04-17T10:22:33.253476+02:00
Errors in file /u01/app/oracle/diag/rdbms/gen183/gen183/trace/gen183_arc0_12598.trc:
ORA-19504: failed to create file "/arch_backup/gen183/archivelog/arch_1_90_985274359.arc"
ORA-17502: ksfdcre:8 Failed to create file /arch_backup/gen183/archivelog/arch_1_90_985274359.arc
ORA-17500: ODM err:File exists
2019-04-17T10:22:33.254078+02:00
ARC0 (PID:12598): Error 19504 Creating archive log file to '/arch_backup/gen183/archivelog/arch_1_90_985274359.arc'
ARC0 (PID:12598): Stuck archiver: inactive mandatory LAD:1
ARC0 (PID:12598): Stuck archiver condition declared

The DB continues to operate normally until it has to overwrite the online redo log file, which has not been fully archived yet. At that point the archiver becomes stuck and modifications on the DB are no longer possible.

When I remove the incomplete archive-file then the DB continues to operate normally:


rm .arch_1_90_985274359.arc

Using a 12.1 database with ODM 3.0 I didn't see that behavior, i.e. I could also see an archived redo log file with a preceding dot in its name, but after a shutdown abort and restart Oracle removed the file itself and there was no archiver problem.

Testcase:

1.) make sure you have direct NFS enabled


cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on
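
(Optionally, you can verify that Direct NFS is really active after the next instance startup by checking the alert log for the "Oracle Direct NFS ODM Library" banner, or by querying the v$dnfs_servers view once some I/O has gone through dNFS:)

SQL> select svrname, dirname from v$dnfs_servers;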

2.) configure a mandatory log archive destination pointing to an NFS-mounted filesystem. E.g.


[root]# mount -t nfs -o rw,bg,hard,rsize=32768,wsize=32768,vers=3,nointr,timeo=600,proto=tcp,suid,nolock,noac nfs_server:/arch_backup /arch_backup
 
SQL> alter system set log_archive_dest_1='location=/arch_backup/gen183/archivelog mandatory reopen=30';

3.) Produce some DML-load on the DB

I created 2 tables t3 and t4 as a copy of all_objects with approx 600’000 rows:


SQL> create table t3 as select * from all_objects;
SQL> insert into t3 select * from t3;
SQL> -- repeat above insert until you have 600K rows in t3
SQL> commit;
SQL> create table t4 as select * from t3;

Run the following PLSQL-block to produce redo:


begin
for i in 1..20 loop
delete from t3;
commit;
insert into t3 select * from t4;
commit;
end loop;
end;
/

4.) While the PLSQL-block of 3.) is running check the archive-files produced in your log archive destination


ls -ltra /arch_backup/gen183/archivelog

Once you see a file created with a preceding dot in its name then shutdown abort the database:


oracle@18cR0:/arch_backup/gen183/archivelog/ [gen183] ls -ltra /arch_backup/gen183/archivelog
total 2308988
drwxr-xr-x. 3 oracle oinstall 23 Apr 17 10:13 ..
-r--r-----. 1 oracle oinstall 2136861184 Apr 24 18:24 arch_1_104_985274359.arc
drwxr-xr-x. 2 oracle oinstall 69 Apr 24 18:59 .
-rw-r-----. 1 oracle oinstall 2090587648 Apr 24 18:59 .arch_1_105_985274359.arc
 
SQL> shutdown abort

5.) If the file with the preceding dot is still there after the shutdown then you have reproduced the issue. Just start up the DB and "tail -f" your alert log file.


oracle@18cR0:/arch_backup/gen183/archivelog/ [gen183] cdal
oracle@18cR0:/u01/app/oracle/diag/rdbms/gen183/gen183/trace/ [gen183] tail -f alert_gen183.log
...
2019-04-24T19:01:24.775991+02:00
Oracle instance running with ODM: Oracle Direct NFS ODM Library Version 4.0
...
2019-04-24T19:01:43.770196+02:00
ARC0 (PID:8876): Unable to create archive log file '/arch_backup/gen183/archivelog/arch_1_105_985274359.arc'
2019-04-24T19:01:43.790546+02:00
Errors in file /u01/app/oracle/diag/rdbms/gen183/gen183/trace/gen183_arc0_8876.trc:
ORA-19504: failed to create file "/arch_backup/gen183/archivelog/arch_1_105_985274359.arc"
ORA-17502: ksfdcre:8 Failed to create file /arch_backup/gen183/archivelog/arch_1_105_985274359.arc
ORA-17500: ODM err:File exists
ARC0 (PID:8876): Error 19504 Creating archive log file to '/arch_backup/gen183/archivelog/arch_1_105_985274359.arc'
ARC0 (PID:8876): Stuck archiver: inactive mandatory LAD:1
ARC0 (PID:8876): Stuck archiver condition declared
...

This is a serious problem, because it may cause an archiver stuck situation after a crash. I opened a Service Request with Oracle. The SR has been assigned to the ODM team now. Once I get a resolution I'll update this blog post.

The post Direct NFS, ODM 4.0 in 12.2: archiver stuck situation after a shutdown abort and restart appeared first on Blog dbi services.

Subtract time from a constant time

Tom Kyte - Wed, 2019-04-24 10:06
Hello there, I want to create a trigger that will insert a Time difference value into a table Example: I have attendance table Sign_in date; Sign_out date; Late_in number; Early_out number; Now I want to create a trigger that will insert la...
Categories: DBA Blogs

Clob column in RDBMS(oracle) table with key value pairs

Tom Kyte - Wed, 2019-04-24 10:06
In our product in recent changes, oracle tables are added with clob column having key value pairs in xml/json format with new columns. Example of employee (please ignore usage of parentheses): 100,Adam,{{"key": "dept", "value": "Marketi...
Categories: DBA Blogs

current value of sequence

Tom Kyte - Wed, 2019-04-24 10:06
Hi. Simple question :-) Is it possible to check the current value of a sequence? I thought it is stored in SEQ$ but that is not true (at least in 11g). So is it now possible at all? Regards
Categories: DBA Blogs

Build it Yourself — Chatbot API with Keras/TensorFlow Model

Andrejus Baranovski - Wed, 2019-04-24 08:59
It is not as complex to build your own chatbot (or assistant, the new trendy term for chatbot) as you may think. Various chatbot platforms are using classification models to recognize user intent. While obviously you get a strong head start when building a chatbot on top of an existing platform, it never hurts to study the background concepts and try to build it yourself. Why not use a similar model yourself? The main challenges of a chatbot implementation are:
  1. Classify user input to recognize intent (this can be solved with Machine Learning, I’m using Keras with TensorFlow backend)
  2. Keep context. This part is programming and there is nothing much ML related here. I’m using Node.js backend logic to track conversation context (while in context, typically we don’t require a classification for user intents — user input is treated as answers to chatbot questions)
Complete source code for this article with readme instructions is available on my GitHub repo (open source).

This is the list of Python libraries which are used in the implementation. The Keras deep learning library is used to build the classification model. Keras runs training on top of the TensorFlow backend. The Lancaster stemming library is used to collapse distinct word forms:
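
A minimal sketch of these imports (Dropout and SGD are used further below; the exact set may differ) could look like this:

import nltk
from nltk.stem.lancaster import LancasterStemmer
import numpy as np
import random
import json
import pickle
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import SGD

nltk.download('punkt')        # tokenizer models needed by word_tokenize
stemmer = LancasterStemmer()  # Lancaster stemmer to collapse word forms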


Chatbot intents and patterns to learn are defined in a plain JSON file. There is no need to have a huge vocabulary. Our goal is to build a chatbot for a specific domain. A classification model can be created for a small vocabulary too; it will be able to recognize the set of patterns provided for the training:
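
A tiny, purely illustrative example of such an intents file (the tags, patterns and responses here are invented, not the ones from the repo) could look like this:

{
  "intents": [
    {
      "tag": "greeting",
      "patterns": ["Hi", "Hello", "Good day"],
      "responses": ["Hello, how can I help you?"]
    },
    {
      "tag": "goodbye",
      "patterns": ["Bye", "See you later"],
      "responses": ["Goodbye!"]
    }
  ]
}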


Before we can start with classification model training, we need to build the vocabulary first. Patterns are processed to build the vocabulary. Each word is stemmed to produce a generic root; this helps to cover more combinations of user input:
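
A sketch of this vocabulary-building step (the variable and file names are illustrative) could look like this:

intents = json.loads(open('intents.json').read())  # the intents file shown above

words = []
classes = []
documents = []
ignore_words = ['?']

# loop through each pattern of each intent
for intent in intents['intents']:
    for pattern in intent['patterns']:
        # tokenize each word in the pattern
        w = nltk.word_tokenize(pattern)
        words.extend(w)
        # a document is the pair (tokenized pattern, intent tag)
        documents.append((w, intent['tag']))
        if intent['tag'] not in classes:
            classes.append(intent['tag'])

# stem, lower-case and deduplicate the vocabulary
words = sorted(set(stemmer.stem(w.lower()) for w in words if w not in ignore_words))
classes = sorted(set(classes))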


This is the output of vocabulary creation. There are 9 intents (classes) and 82 vocabulary words:


Training cannot run on the vocabulary of words directly; words are meaningless to the machine. We need to translate the words into bags of words, i.e. arrays containing 0/1. The array length will be equal to the vocabulary size, and 1 will be set when a word from the current pattern is located in the given position:
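
A sketch of this conversion into training data could look like this:

training = []
output_empty = [0] * len(classes)

for doc in documents:
    # bag of words: 1 for each vocabulary word present in this pattern
    pattern_words = [stemmer.stem(w.lower()) for w in doc[0]]
    bag = [1 if w in pattern_words else 0 for w in words]
    # output row: a single 1 at the position of this pattern's intent
    output_row = list(output_empty)
    output_row[classes.index(doc[1])] = 1
    training.append((bag, output_row))

random.shuffle(training)
train_x = [t[0] for t in training]
train_y = [t[1] for t in training]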


Training data: X (each pattern converted into an array [0,1,0,1,…,0]) and Y (each intent converted into an array [1,0,0,0,…,0]; there will be a single 1 in the intents array). The model is built with Keras, based on three layers. According to my experiments, three layers provide good results (but it all depends on the training data). The classification output will be a multiclass array, which helps to identify the encoded intent. Softmax activation is used to produce the multiclass classification output (the result returns an array like [1,0,0,…,0] which identifies the encoded intent):
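
A sketch of such a three-layer model (the layer sizes 128 and 64 are illustrative choices) could look like this:

model = Sequential()
model.add(Dense(128, input_shape=(len(train_x[0]),), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
# one output neuron per intent, softmax for multiclass classification
model.add(Dense(len(train_y[0]), activation='softmax'))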


Compile Keras model with SGD optimizer:
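
For example (the exact SGD hyperparameters are a matter of tuning; these are common values):

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])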


Fit the model — execute training and construct classification model. I’m executing training in 200 iterations, with batch size = 5:
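
In Keras terms that translates into something like:

model.fit(np.array(train_x), np.array(train_y), epochs=200, batch_size=5, verbose=1)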


The model is built. Now we can define two helper functions. The function bow helps to translate a user sentence into a bag of words, an array of 0/1:
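
A sketch of these two helper functions could look like this:

def clean_up_sentence(sentence):
    # tokenize and stem each word of the user sentence
    sentence_words = nltk.word_tokenize(sentence)
    return [stemmer.stem(w.lower()) for w in sentence_words]

def bow(sentence, words):
    # mark 1 for each vocabulary word found in the sentence
    sentence_words = clean_up_sentence(sentence)
    bag = [0] * len(words)
    for s in sentence_words:
        for i, w in enumerate(words):
            if w == s:
                bag[i] = 1
    return np.array(bag)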


Check this example — translating the sentence into a bag of words:


When the function finds a word from the sentence in chatbot vocabulary, it sets 1 into the corresponding position in the array. This array will be sent to be classified by the model to identify to what intent it belongs:


It is a good practice to save the trained model into a pickle file to be able to reuse it to publish through Flask REST API:
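
One way to do that (pickling the vocabulary, classes and training data, and saving the Keras model itself with model.save, since a Keras model is not reliably picklable; the file names are just examples) is:

pickle.dump({'words': words, 'classes': classes,
             'train_x': train_x, 'train_y': train_y},
            open('training_data.pkl', 'wb'))
model.save('chatbot_model.h5')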


Before publishing the model through the Flask REST API, it is always good to run an extra test. Use the model.predict function to classify user input and, based on the calculated probability, return the intent (multiple intents can be returned):
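
A sketch of such a classification helper (the 0.25 probability threshold is an arbitrary example) could look like this:

ERROR_THRESHOLD = 0.25

def classify(sentence):
    # run the bag of words through the model and keep intents above the threshold
    results = model.predict(np.array([bow(sentence, words)]))[0]
    results = [(i, p) for i, p in enumerate(results) if p > ERROR_THRESHOLD]
    results.sort(key=lambda x: x[1], reverse=True)
    return [(classes[i], float(p)) for i, p in results]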


Example to classify sentence:


The intent is calculated correctly:


To publish the same function through a REST endpoint, we can wrap it in a Flask API:
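
A sketch of such a wrapper (the route name and payload format are chosen just for illustration) could look like this:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/classify', methods=['POST'])
def classify_endpoint():
    # expects a JSON body like {"sentence": "..."}
    sentence = request.get_json(force=True).get('sentence', '')
    return jsonify({'intents': classify(sentence)})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)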


I have explained how to implement the classification part. In the GitHub repo referenced at the beginning of the post, you will find a complete example of how to maintain the context. Context is maintained by logic written in JavaScript and running on a Node.js backend. The context flow must be defined in the list of intents; as soon as the intent is classified and the backend logic finds the start of a context, we enter the loop and ask related questions. How advanced the context handling is depends entirely on the backend implementation (this is beyond the Machine Learning scope at this stage).

Chatbot UI:

Oracle Linux certified under Common Criteria and FIPS 140-2

Oracle Security Team - Wed, 2019-04-24 08:40

Oracle Linux 7 has just received both a Common Criteria (CC) certification, performed against the National Information Assurance Partnership (NIAP) General Purpose Operating System Protection Profile (OSPP) v4.1, and a FIPS 140-2 validation of its cryptographic modules.  Oracle Linux is currently one of only two operating systems – and the only Linux distribution – on the NIAP Product Compliant List.

U.S. Federal procurement policy requires IT products sold to the Department of Defense (DoD) to be on this list; therefore, Federal cloud customers who select Oracle Cloud Infrastructure can now opt for a NIAP CC-certified operating system that also includes FIPS 140-2 validated cryptographic modules, by making Oracle Linux 7 the platform for their cloud services solution.

Common Criteria Certification for Oracle Linux 7

The National Information Assurance Partnership (NIAP) is “responsible for U.S. implementation of the Common Criteria, including management of the NIAP Common Criteria Evaluation and Validation Scheme (CCEVS) validation body.”(See About NIAP at https://www.niap-ccevs.org/ )

The Operating Systems Protection Profile (OSPP) series are the only NIAP-approved Protection Profiles for operating systems. “A Protection Profile is an implementation-independent set of security requirements and test activities for a particular technology that enables achievable, repeatable, and testable (CC) evaluations.”  They are intended to “accurately describe the security functionality of the systems being certified in terms of [CC] and to define functional and assurance requirements for such products.”  In other words, the OSPP enables organizations to make an accurate comparison of operating systems security functions. (For both quotations, see NIAP Frequently Asked Questions (FAQ) at https://www.niap-ccevs.org/Ref/FAQ.cfm)

In addition, products that certify against these Protection Profiles can also help you meet certain US government procurement rules.  As set forth in the Committee on National Security Systems Policy (CNSSP) #11, National Policy Governing the Acquisition of Information Assurance (IA) and IA-Enabled Information Technology Products (published in June 2013), “All [common off-the-shelf] COTS IA and IA-enabled IT products acquired for use to protect information on NSS shall comply with the requirements of the NIAP program in accordance with NSA-approved processes.”  

Oracle Linux is now the only Linux distribution on the NIAP Product Compliant List.  It is one of only two operating systems on the list.

You may recall that Linux distributions (including Oracle Linux) have previously completed Common Criteria evaluations (mostly against a German standard protection profile); however, these evaluations are now limited because they are only officially recognized in Germany and within the European SOG-IS agreement. Furthermore, the revised Common Criteria Recognition Arrangement (CCRA) announcement on the CCRA News Page from September 8th 2014, states that "After September 8th 2017, mutually recognized certificates will either require protection profile-based evaluations or claim conformance to evaluation assurance levels 1 through 2 in accordance with the new CCRA."  That means evaluations conducted within the CCRA acceptance rules, such as the Oracle Linux 7.3 evaluation, are globally recognized in the 30 countries that have signed the CCRA. As a result, Oracle Linux 7.3 is the only Linux distribution that meets current US procurement rules.

It is important to recognize that the exact status of the certifications of operating systems under the NIAP OSPP has significant implications for the use of cloud services by U.S. government agencies.  The Federal Risk and Authorization Management Program (FedRAMP) website states that it is a "government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services." For both FedRAMP Moderate and High, the SA-4 Guidance states "The use of Common Criteria (ISO/IEC 15408) evaluated products is strongly preferred."

FIPS 140-2 Level 1 Validation for Oracle Linux 6 and 7

In addition to the Common Criteria Certification, Oracle Linux cryptographic modules are also now FIPS 140-2 validated. FIPS 140-2 is a prerequisite for NIAP Common Criteria evaluations. “All cryptography in the TOE for which NIST provides validation testing of FIPS-approved and NIST-recommended cryptographic algorithms and their individual components must be NIST validated (CAVP and/or CMVP). At a minimum an appropriate NIST CAVP certificate is required before a NIAP CC Certificate will be awarded.” (See NIAP Policy Letter #5, June 25, 2018 at https://www.niap-ccevs.org/Documents_and_Guidance/ccevs/policy-ltr-5-update3.pdf )

FIPS is also a mandatory standard for all cryptographic modules used by the US government. “This standard is applicable to all Federal agencies that use cryptographic-based security systems to protect sensitive information in computer and telecommunication systems (including voice systems) as defined in Section 5131 of the Information Technology Management Reform Act of 1996, Public Law 104-106.” (See Cryptographic Module Validation Program; What Is The Applicability Of CMVP To The US Government? at https://csrc.nist.gov/projects/cryptographic-module-validation-program ).

Finally, FIPS is required for any cryptography that is a part of a FedRAMP certified cloud service. "For data flows crossing the authorization boundary or anywhere else encryption is required, FIPS 140 compliant/validated cryptography must be employed. FIPS 140 compliant/validated products will have certificate numbers. These certificate numbers will be required to be identified in the SSP as a demonstration of this capability. JAB TRs will not authorize a cloud service that does not have this capability." (See FedRAMP Tips & Cues Compilation, January 2018, at https://www.fedramp.gov/assets/resources/documents/FedRAMP_Tips_and_Cues.pdf ).

Oracle includes FIPS 140-2 Level 1 validated cryptography into Oracle Linux 6 and Oracle Linux 7 on x86-64 systems with the Unbreakable Enterprise Kernel and the Red Hat Compatible Kernel. The platforms used for FIPS 140 validation testing include Oracle Server X6-2 and Oracle Server X7-2, running Oracle Linux 6.9 and 7.3. Oracle “vendor affirms” that the FIPS validation is maintained on other x86-64 equivalent hardware that has been qualified in its Oracle Linux Hardware Certification List (HCL), on the corresponding Oracle Linux releases.

Oracle Linux cryptographic modules enable FIPS 140-compliant operations for key use cases such as data protection and integrity, remote administration (SSH, HTTPS TLS, SNMP, and IPSEC), cryptographic key generation, and key/certificate management.

Federal cloud customers who select Oracle Cloud Infrastructure can now opt for a NIAP CC-certified operating system (that also includes FIPS 140-2 validated cryptographic modules) by making Oracle Linux 7 the bedrock of their cloud services solution.

Oracle Linux is engineered for open cloud infrastructure. It delivers leading performance, scalability, reliability, and security for enterprise SaaS and PaaS workloads as well as traditional enterprise applications. Oracle Linux Support offers access to award-winning Oracle support resources and Linux support specialists, zero-downtime updates using Ksplice, additional management tools such as Oracle Enterprise Manager and lifetime support, all at a low cost.

For a matrix of Oracle security evaluations currently in progress as well as those completed, please refer to the Oracle Security Evaluations.

Visit Oracle Linux Security to learn how Oracle Linux can help keep your systems secure and improve the speed and stability of your operations.

