Oracle 2 Postgres cheatsheet

Comparative "How To" between Oracle and Postgres

The following text describes Oracle as it was before the pluggable era, which started with the 12th major version. While the majority (if not all) of these examples will work as described, newer releases may add features that turn some "not possible" cases into things Oracle can already do. One such example is the OFFSET clause in SELECT, which was absent in the latest 11.2 version.

The Postgres version described is mainly the latest release of the 11 branch, which is 11.7.

In general, the provided examples are not version specific and should work on the latest versions of both Oracle and Postgres.

SQL

Basic SELECT

Oracle

A FROM clause is mandatory. The special dual table serves as a placeholder for such cases.

SELECT 1 AS id FROM dual;

Postgres

A FROM clause is not needed, hence there is no analogue of the dual table.

SELECT 1 AS id;

NULL vs empty string

Oracle

The empty string '' is considered to be NULL.

SELECT CASE
          WHEN '' IS NULL
          THEN 'null'
          ELSE 'not null'
       END        AS result
  FROM dual
;
-- result
-- --------
-- null

Postgres

The empty string '' is not considered to be NULL.

SELECT CASE
          WHEN '' IS NULL
          THEN 'null'
          ELSE 'not null'
       END        AS result
;
-- result
-- --------
-- not null

Postgres has a BOOLEAN datatype (values: t - true, f - false), which allows a shorter query form.

SELECT '' IS NULL             AS result
     , pg_typeof('' IS NULL)  AS result_type
;
-- result | result_type
-- -------+-------------
-- f      | boolean
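
When porting Oracle code that relies on '' being NULL, a common workaround (a sketch, not part of the original examples) is to normalize empty strings with NULLIF:

SELECT NULLIF('', '') IS NULL AS result;
-- result
-- --------
-- t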

Concatenation with NULL

Oracle

Concatenation with NULL is safe; the NULL value is simply ignored.

SELECT 'asdf' || NULL || 1234 AS result FROM dual;
-- result
-- --------
-- asdf1234

The same applies to the empty string '', which is considered to be NULL.

SELECT 'asdf' || '' || 1234 AS result FROM dual;
-- result
-- --------
-- asdf1234

Postgres

Concatenation with NULL makes the result string NULL as well. To avoid this, the CONCAT function should be used.

SELECT 'asdf' || NULL || 1234 AS result;
-- result
-- --------
--
SELECT CONCAT('asdf', NULL, 1234) AS result;
-- result
-- --------
-- asdf1234

The empty string '' is not considered to be NULL. Concatenating with it keeps the value unchanged, so it can simply be omitted.

SELECT 'asdf' || '' || 1234 AS with_empty
     , 'asdf' || 1234       AS without_empty
;
--  with_empty | without_empty
-- ------------+---------------
--  asdf1234   | asdf1234

The same applies to CONCAT: the empty string '' can be omitted.

SELECT CONCAT('asdf', '', 1234) AS with_empty
     , CONCAT('asdf', 1234)     AS without_empty
;
--  with_empty | without_empty
-- ------------+---------------
--  asdf1234   | asdf1234

Generating rows

Oracle

The shortest form is a hierarchical query cut off by the LEVEL pseudocolumn

SELECT LEVEL FROM dual CONNECT BY LEVEL <= 3;

Recursive subquery factoring form

WITH t(id) AS (
   SELECT 1         AS id
     FROM dual
    UNION ALL
   SELECT t.id + 1  AS id
     FROM t
    WHERE t.id < 3
)
SELECT *
  FROM t;

Postgres

The shortest form is a select from the generate_series function

SELECT * FROM generate_series(1, 3);

Recursive subquery factoring form

WITH RECURSIVE t AS (
   SELECT 1         AS id
    UNION ALL
   SELECT t.id + 1  AS id
     FROM t
    WHERE t.id < 3
)
SELECT *
  FROM t;

Row generation for a predefined set of values can also be done via the VALUES clause or an UNNEST function call

SELECT * FROM (VALUES (1), (2), (3)) t(id);
SELECT * FROM UNNEST('{1, 2, 3}'::INT[]);
SELECT UNNEST('{1, 2, 3}'::INT[]);

SYSDATE vs now()

Probably one of the most widely used objects in Oracle, alongside the dual table, is the SYSDATE function. It returns the current date and time without the fractional seconds part. Unfortunately, Postgres doesn't have an explicit analogue for it, so a small trick is required.

Oracle

There are three main categories of date/time related datatypes: DATE, TIMESTAMP and INTERVAL. The DATE datatype includes both the date and time portions of a datetime value. The SYSDATE function mentioned above returns the DATE datatype.

alter session set nls_date_format = 'dd.mm.yyyy hh24:mi:ss';
select sysdate from dual;
-- SYSDATE
-- 09.05.2020 21:35:33

Postgres

There are four main categories of date/time related datatypes: DATE, TIME, TIMESTAMP and INTERVAL. Because a separate TIME datatype exists, there are separate functions to get the current date and the current time.

select current_date, current_time;
--  current_date |    current_time
-- --------------+--------------------
--  2020-05-09   | 21:38:19.892497+00

There is no function that returns the date and time portions as a single value of a date/time datatype. The closest candidates are current_timestamp or its shorthand alternative, the now() function. Both return the TIMESTAMP WITH TIME ZONE datatype.

select current_timestamp, now();
--        current_timestamp       |              now
-- -------------------------------+-------------------------------
--  2020-05-09 21:41:18.773393+00 | 2020-05-09 21:41:18.773393+00

In order to get a value identical to Oracle's SYSDATE function we can cast the returned timestamp value to drop the fractional seconds part.

select now()::timestamp(0);
--          now
-- ---------------------
--  2020-05-09 21:44:50

Another difference is that these functions return the timestamp as of the start of the current transaction. The closer candidate is therefore the clock_timestamp function, whose value changes on each call, even within a single SQL statement.

select clock_timestamp()::timestamp(0);
--    clock_timestamp
-- ---------------------
--  2020-05-09 21:50:56

select clock_timestamp() from generate_series(1, 10);
--         clock_timestamp
-- -------------------------------
--  2020-05-09 21:51:06.283102+00
--  2020-05-09 21:51:06.28311+00
--  2020-05-09 21:51:06.283113+00
--  2020-05-09 21:51:06.283114+00
--  ...

Finally, the timestamp-returning functions in Postgres mentioned above are time zone sensitive, while the SYSDATE function is time zone independent. Converting to the required time zone before casting to TIMESTAMP(0) can be applied to the clock_timestamp() call.

select (clock_timestamp() at time zone 'America/New_York')::timestamp(0);
--       timezone
-- ---------------------
--  2020-05-09 21:28:07
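
If a lot of ported code calls SYSDATE directly, one option is to wrap the expression above in a small helper function. This is an illustrative sketch, not a Postgres built-in; the function name and the UTC zone are assumptions:

create or replace function sysdate() returns timestamp as
$$ select (clock_timestamp() at time zone 'UTC')::timestamp(0) $$
language sql;

select sysdate();  -- callers keep the familiar Oracle-like spelling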

MERGE

Oracle has the standard MERGE command, which allows either creating or updating a record. Postgres doesn't have it; merging functionality is provided via the INSERT ... ON CONFLICT DO UPDATE syntax of the basic INSERT command.

Oracle

We are interested only in the UPDATE part of MERGE, because that is the direct counterpart of ON CONFLICT DO UPDATE in Postgres.

Merge by unique key works as expected.

create table users
( name      varchar2(100)   not null
, surname   varchar2(100)
, birthdate date
, constraint users_uk unique (name)
);

insert into users (name, surname) values ('John', 'Doe');

select * from users;
-- NAME  SURNAME  BIRTHDATE
-- John  Doe

merge
 into users u
using ( select 'John' as name, date '2000-01-01' as birthdate from dual ) src
   on ( src.name = u.name )
 when matched then
    update
       set u.birthdate = src.birthdate
;

alter session set nls_date_format = 'yyyy-mm-dd';
select * from users;
-- NAME  SURNAME  BIRTHDATE
-- John  Doe      2000-01-01

Merge by an arbitrary set of columns works as well. There is no requirement for a primary or unique key to exist on the merge join condition. The only drawback of this approach is that the join condition might be evaluated via a full scan of the users table.

-- MERGE by name and surname. There is no PK or UK constraint defined by these two attributes
merge
 into users u
using ( select 'John' as name, 'Doe' as surname, date '2010-01-01' as birthdate from dual ) src
   on ( src.name = u.name and src.surname = u.surname )
 when matched then
    update
       set u.birthdate = src.birthdate
;

-- Birthdate is successfully updated to "2010-01-01"
select * from users;
-- NAME  SURNAME  BIRTHDATE
-- John  Doe      2010-01-01

Postgres

MERGE is emulated via the INSERT ... ON CONFLICT DO UPDATE syntax. The basic INSERT is attempted first; if the row exists and a primary key or unique/exclusion constraint would be violated, the ON CONFLICT DO UPDATE section is triggered.

Merge by unique key works as expected.

create table users
( name      varchar  not null
, surname   varchar
, birthdate date
, constraint users_uk unique (name)
);

insert into users (name, surname) values ('John', 'Doe');

select * from users;
--  name | surname | birthdate
-- ------+---------+------------
--  John | Doe     |

insert
  into users (name, birthdate)
values ('John', date '2000-01-01')
    on conflict (name) -- constraint name "users_uk"
    do update
   set birthdate = EXCLUDED.birthdate
;

select * from users;
--  name | surname | birthdate
-- ------+---------+------------
--  John | Doe     | 2000-01-01

So far so good. But the main problem is that ON CONFLICT detects an insertion conflict only on a primary key or a unique/exclusion constraint. This means it's impossible to implement MERGE by an arbitrary set of attributes as in the Oracle example above.

-- MERGE by name and surname. There is no PK or UK constraint defined by these two attributes
insert
  into users (name, surname, birthdate)
values ('John', 'Doe', date '2010-01-01')
    on conflict (name, surname) -- there is no such PK or UK constraint defined!
    do update
   set birthdate = EXCLUDED.birthdate
;

-- ERROR:  there is no unique or exclusion constraint matching the ON CONFLICT specification

For such cases a trick is required. We can LEFT JOIN the merge target table users to the source data and determine whether a match exists for the given merge join condition. We select the merge join attributes from the target table and prefix them, for example as tgt_*. Later on we analyze these attributes: if all of them are NULL there is no match and we should just INSERT; if there is a match we should UPDATE.

do $$
declare
   rec record;
begin
   for rec in
      select src.name         as name
           , src.surname      as surname
           , src.birthdate    as birthdate
           , tgt.name         as tgt_name
           , tgt.surname      as tgt_surname
        from ( values ('John', 'Doe', date '2010-01-01') )
          as src (name, surname, birthdate)
        left join
             users       as tgt
          on tgt.name    = src.name
         and tgt.surname = src.surname
   loop
      if rec.tgt_name is null and rec.tgt_surname is null
      then
         insert
           into users
         values (rec.name, rec.surname, rec.birthdate);
      else
         update users      u
            set birthdate  = rec.birthdate
          where u.name     = rec.tgt_name
            and u.surname  = rec.tgt_surname;
      end if;
   end loop;
end;
$$;
-- DO

-- We successfully merged by name and surname and got birthdate update to "2010-01-01"
select * from users;
--  name | surname | birthdate
-- ------+---------+------------
--  John | Doe     | 2010-01-01
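
As a side note, a single-statement alternative to the loop above is a writable CTE that updates first and inserts only the source rows that found no match. This is a sketch, not a full equivalent of ON CONFLICT: it is not safe against concurrent inserts of the same key.

with src as (
   select 'John' as name, 'Doe' as surname, date '2010-01-01' as birthdate
), upd as (
   -- update existing rows and report which keys were touched
   update users u
      set birthdate = src.birthdate
     from src
    where u.name    = src.name
      and u.surname = src.surname
   returning u.name, u.surname
)
-- insert only the source rows that were not updated
insert into users (name, surname, birthdate)
select src.name, src.surname, src.birthdate
  from src
 where not exists ( select 1
                      from upd
                     where upd.name    = src.name
                       and upd.surname = src.surname );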

PL/SQL vs PL/pgSQL

Anonymous code block

Oracle

An anonymous block consists of BEGIN and END clauses terminated with the ; symbol. The body can't be empty; to provide an effectively empty body the NULL; statement can be used

BEGIN
   NULL;
END;

If used interactively from a client tool like sqlplus, the command-ending symbol (defaults to /) must be specified as well

BEGIN
   NULL;
END;
/

Postgres

All code is provided as a string literal bounded by paired $$ quotes. It's possible to use named paired quotes such as $my_code_block$.

The DO command accepts any code block specified as a literal and sends it to the server for execution

DO $$
BEGIN
   NULL;
END;
$$;

It's allowed to have an empty body for any clause, such as the BEGIN/END pair

DO $$
BEGIN
END;
$$;

Message output

Oracle

The system package DBMS_OUTPUT provides methods for outputting an arbitrary string to STDOUT. If used interactively from a client tool like sqlplus, the SET SERVEROUTPUT ON setting should be enabled

BEGIN
   DBMS_OUTPUT.PUT_LINE('Hello World!');
END;
/
-- Hello World!

To make sure from within code that output is enabled for the current session, the ENABLE method turns output spooling on. Passing it a NULL value means an unlimited buffer size.

BEGIN
   DBMS_OUTPUT.ENABLE(NULL);
   DBMS_OUTPUT.PUT_LINE('Hello World!');
END;
/
-- Hello World!

Postgres

Messages of different levels can be emitted with the RAISE command; the default level is EXCEPTION.

DO $$
BEGIN
   RAISE INFO USING message = 'Hello World!';
END;
$$;
-- INFO:  Hello World!
-- DO

The level hierarchy is: DEBUG, LOG, INFO, NOTICE, WARNING, EXCEPTION. The special INFO level doesn't depend on the client_min_messages runtime setting.

DO $$
BEGIN
   SET client_min_messages = WARNING;
   RAISE INFO    USING message = 'Hello World!';
   RAISE NOTICE  USING message = 'Hello World!';
   RAISE WARNING USING message = 'Hello World!';
END;
$$;
-- INFO:  Hello World!
-- WARNING:  Hello World!
-- DO

SELECT INTO vs INTO STRICT

Oracle

There is a single SELECT INTO mode. It executes successfully only if exactly one record is selected, and raises the NO_DATA_FOUND or TOO_MANY_ROWS exception if no rows or more than one row are selected, respectively.

DECLARE
   v_id INT;
BEGIN
   SELECT 1
     INTO v_id
     FROM dual
   ;
   DBMS_OUTPUT.PUT_LINE('Value: ' || v_id);
END;
-- Value: 1

NO_DATA_FOUND is generated in case no rows are selected

DECLARE
   v_id INT;
BEGIN
   SELECT 1
     INTO v_id
     FROM dual
    WHERE 1 = 0
   ;
   DBMS_OUTPUT.PUT_LINE('Value: ' || v_id);
END;
-- ORA-01403: no data found ORA-06512: at line 4
-- ORA-06512: at "SYS.DBMS_SQL", line 1721

TOO_MANY_ROWS is generated in case more than one row is selected

DECLARE
   v_id INT;
BEGIN
   SELECT 1
     INTO v_id
     FROM dual
  CONNECT BY LEVEL <= 2
   ;
   DBMS_OUTPUT.PUT_LINE('Value: ' || v_id);
END;
-- ORA-01422: exact fetch returns more than requested number of rows ORA-06512: at line 4
-- ORA-06512: at "SYS.DBMS_SQL", line 1721

Postgres

There are two modes: SELECT INTO and SELECT INTO STRICT. In order to get behavior identical to Oracle's, one must use SELECT INTO STRICT. Plain SELECT INTO generates neither the NO_DATA_FOUND nor the TOO_MANY_ROWS exception.

In the examples below, the COALESCE function returns the first non-null parameter among the provided ones

DO $$
DECLARE
   v_id INT;
BEGIN
   SELECT 1
     INTO v_id
   ;
   RAISE INFO USING message = CONCAT('Value: ', COALESCE(v_id, 0));
END;
$$;
-- INFO:  Value: 1
-- DO

In case of no rows, SELECT INTO assigns NULL to the target variable and doesn't generate a NO_DATA_FOUND exception.

DO $$
DECLARE
   v_id INT;
BEGIN
   SELECT 1
     INTO v_id
    WHERE 1 = 0
   ;
   RAISE INFO USING message = CONCAT('Value: ', COALESCE(v_id, 0));
END;
$$;
-- INFO:  Value: 0
-- DO

In case of many rows, SELECT INTO picks up the value from the first row and discards the others. It doesn't generate a TOO_MANY_ROWS exception

DO $$
DECLARE
   v_id INT;
BEGIN
   SELECT t.id
     INTO v_id
     FROM generate_series(1, 3) t(id)
    ORDER BY t.id DESC
   ;
   RAISE INFO USING message = CONCAT('Value: ', COALESCE(v_id, 0));
END;
$$;
-- INFO:  Value: 3
-- DO
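
For completeness, a sketch of the STRICT variant applied to the no-rows case above: it raises an error analogous to Oracle's NO_DATA_FOUND (the exact message text may vary by version).

DO $$
DECLARE
   v_id INT;
BEGIN
   SELECT 1
     INTO STRICT v_id   -- STRICT makes no-rows / many-rows an error
    WHERE 1 = 0
   ;
   RAISE INFO USING message = CONCAT('Value: ', COALESCE(v_id, 0));
END;
$$;
-- ERROR:  query returned no rows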

Temporary tables

In Oracle, temporary tables are persistent schema-level objects similar to ordinary tables. They are created once and exist until explicitly dropped. Data in temporary tables is kept either for the duration of the active transaction or for the duration of the session.

In contrast, in Postgres they are purely temporary objects in the sense that they are automatically dropped at session end. This requires an additional workaround in order to get persistent temporary tables in Postgres like in Oracle. In terms of data lifetime there is no difference: Postgres has the same two modes, transaction and session duration.

A working example of a persistent temp table implementation can be found in this blog post.
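
A minimal sketch of the two data-lifetime modes side by side (the table names are illustrative):

-- Oracle: the table definition is persistent, only the data is temporary
create global temporary table tmp_tx  (id int) on commit delete rows;   -- transaction scope
create global temporary table tmp_ses (id int) on commit preserve rows; -- session scope

-- Postgres: the table itself disappears at session end
create temporary table tmp_tx  (id int) on commit delete rows;          -- transaction scope
create temporary table tmp_ses (id int) on commit preserve rows;        -- session scope (default)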

BULK COLLECT

Oracle

This is a native Oracle PL/SQL language construct which allows storing multiple rows into a collection variable in a single operation. The most often used collection type for bulk collect is the nested table, because it can store an arbitrary number of elements and is thus more flexible than, for example, varrays.

declare
   type nt is table of integer;
   v_ids nt;
begin
   select level
     bulk collect
     into v_ids
     from dual
  connect by level <= 10;

   dbms_output.put_line(v_ids.count);
end;
/
-- 10

Another approach to get a collection from SQL is the COLLECT function. It must be wrapped in a CAST call to a collection type defined at schema level.

create type nt as table of integer;
-- Type created.

declare
   v_ids nt;
begin
   select cast(collect(level) as nt)
     into v_ids
     from dual
  connect by level <= 10;

   dbms_output.put_line(v_ids.count);
end;
/
-- 10

Postgres

There is no BULK COLLECT statement in Postgres, and all such cases must be rewritten. To achieve the same we can use the array_agg function, which accepts any non-array type and returns an array of the argument type.

do $$
declare
   v_ids int[];
begin
   select array_agg(t.id)
     into v_ids
     from generate_series(1, 10) t(id)
   ;
   raise info using message = array_length(v_ids, 1);
end;
$$;
-- INFO:  10

This works well when you have just a few collections to fill. But when you have to select many attributes simultaneously, it might be worth considering a temporary table instead. In Postgres a temporary table is automatically dropped at the end of the current session, which makes its data act similarly to ordinary variables.

Instead of

do $$
declare
   v_ids    int[];
   v_names  varchar[];
   v_dt     timestamp[];
begin
   select array_agg(t.id)
        , array_agg(md5(t.id::text))
        , array_agg(clock_timestamp())
     into v_ids
        , v_names
        , v_dt
     from generate_series(1, 10) t(id)
   ;
   raise info using message = v_ids;
   raise info using message = v_names;
   raise info using message = v_dt;
end;
$$;

We can use

do $$
declare
   rec record;
begin
   drop table if exists tmp;

   create temp table tmp as
   select t.id                   as id
        , md5(t.id::text)        as name
        , clock_timestamp()      as dt
     from generate_series(1, 10) t(id)
   ;

   for rec in
      select tmp.* from tmp
   loop
      raise info using message = concat_ws(' - ', rec.id, rec.name, rec.dt);
   end loop;
end;
$$;

The temp table approach looks even more beneficial when, while processing the fetched collection variables, you need to change some of their values. With a table in place you can apply the full power of SQL to it and change the data as needed; with collections you would have to use loops and array element access notation.

Because temp tables are kept until the end of the session, you should explicitly drop them as soon as you've finished working with them. This helps avoid resource leakage when many temp tables are created within the current session.

do $$
declare
   rec record;
begin
   drop table if exists tmp;

   create temp table tmp as
   select t.id                   as id
        , md5(t.id::text)        as name
        , clock_timestamp()      as dt
     from generate_series(1, 10) t(id)
   ;

   -- per row logic
   for rec in
      select tmp.* from tmp
   loop
      -- do the needed
      raise info using message = concat_ws(' - ', rec.id, rec.name, rec.dt);
   end loop;

   -- bulk logic
   update tmp
      set dt = tmp.dt + interval '1' day
    where tmp.name like 'John%'
   ;

   drop table if exists tmp;
exception
   when others then
      drop table if exists tmp;
end;
$$;

COMMIT

Transaction control statements are allowed in both PL/SQL and PL/pgSQL, with Postgres having more limited support for them.

Oracle

There is no difference between procedures and functions with regard to COMMIT usage. Both types of routines may contain it.

create or replace procedure p as
begin
    commit;
    dbms_output.put_line('procedure p called');
end;
/
create or replace function f return int as
begin
    commit;
    dbms_output.put_line('function f called');
    return 0;
end;
/

declare
    v int;
begin
    p();
    v := f();
end;
/
-- procedure p called
-- function f called

The only limitation is that a function containing a transaction control statement is not allowed in SELECT unless the AUTONOMOUS_TRANSACTION pragma is specified.

select f() from dual;
-- ORA-14552: cannot perform a DDL, commit or rollback inside a query or DML

-- create function with pragma for autonomous transaction inside
create or replace function f2 return int as
    pragma autonomous_transaction;
begin
    commit;
    dbms_output.put_line('function f2 called');
    return 0;
end;
/

select f2() from dual;
-- 0

Postgres

Transaction control statements are forbidden in functions, and this is a remarkable difference from Oracle

create or replace function f() returns void as
$$ begin commit; end; $$ language plpgsql;

=> select * from f();
-- ERROR:  invalid transaction termination
-- CONTEXT:  PL/pgSQL function f() line 1 at COMMIT

To be able to issue COMMIT, a procedure must be used instead. It also requires the special CALL command to be executed.

create or replace procedure p() as
$$ begin commit; end; $$ language plpgsql;

=> call p();
-- CALL

Also, you cannot select from a procedure as you can from a function returning void. The only acceptable way of calling a procedure is via the CALL command, as shown above.

=> select p();
-- ERROR:  p() is a procedure
-- LINE 1: select p();
--                ^
-- HINT:  To call a procedure, use CALL.

create or replace function f() returns void
as $$ begin null; end; $$ language plpgsql;

=> select f();
--  f
-- ---
-- (1 row)

Unfortunately this is not the only limitation. Even with procedures there is still not the same level of flexibility as in Oracle. You can commit from a procedure only if the current call stack consists solely of procedure calls. If there is a function call in between, an attempt to COMMIT or ROLLBACK fails.

create or replace procedure p2() as
$$ begin call p(); end; $$ language plpgsql;

create or replace function f2() returns void
as $$ begin call p(); end; $$ language plpgsql;

=> call p2();
-- CALL

=> select f2();
-- ERROR:  invalid transaction termination
-- CONTEXT:  PL/pgSQL function p() line 1 at COMMIT
-- SQL statement "CALL p()"
-- PL/pgSQL function f2() line 1 at CALL

This happens because functions are executed within a subtransaction context, and you cannot end that context while being inside it.

In summary, Postgres started to support transaction control statements via the new procedure object type, but the support is still limited. For Oracle code with many nested calls and "random" usage of COMMIT, it could be problematic to achieve the same behavior in PL/pgSQL; some redesign might be needed.

General

Session monitoring

Oracle

The main entry point for session activity monitoring is the v$session system view on a single-instance installation and gv$session on a multi-instance RAC.

SELECT * FROM gv$session;

Postgres

Session activity is accessible through the pg_stat_activity system view.

SELECT * FROM pg_stat_activity;
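
For a quick look at what is currently running, the view can be filtered by state (a minimal sketch; column names as in recent Postgres versions):

SELECT pid, usename, datname, state, query
  FROM pg_stat_activity
 WHERE state = 'active';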

User vs Schema

Oracle

In Oracle a user is identical to a schema. Creating a user automatically creates a schema with the same name, which acts as a logical namespace for all objects owned by the user.

Note: since the 12th version there is a separation between common and local users that can be created in a container database. These details aren't in the scope of this post. In basic pluggable databases users' behavior didn't change and follows the pattern described above.

CREATE USER tom IDENTIFIED BY password;

Postgres

In Postgres a user is not tied to a specific database and exists outside of its scope. A user is defined at the database cluster level. It can have access to different databases, and within each database it can own many schemas. Thus, there is no requirement that a schema name match the user name.

CREATE USER tom;
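
A schema is created separately and can be owned by that user; a sketch with an illustrative schema name:

CREATE SCHEMA sales AUTHORIZATION tom;  -- schema name need not match the user name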

Invalid object state

Oracle

Oracle allows object creation with compilation errors or unresolved object names. The corresponding object obtains the INVALID status and can be recompiled later

CREATE OR REPLACE FORCE VIEW v AS SELECT * FROM non_existent_table;
-- Errors: VIEW V
-- Line/Col: 0/0 ORA-00942: table or view does not exist

SELECT object_type, status FROM user_objects WHERE object_name = 'V';
-- OBJECT_TYPE STATUS
-- VIEW        INVALID

It is also possible to drop an object referenced by a view or procedure. In such a case the referencing object's status is switched to INVALID.

CREATE TABLE t(id INT);
CREATE OR REPLACE VIEW v AS SELECT * FROM t;
DROP TABLE t;

SELECT object_type, status FROM user_objects WHERE object_name = 'V';
-- OBJECT_TYPE STATUS
-- VIEW        INVALID

Postgres

Postgres doesn't have an INVALID state and will not allow such object creation. For statically parsed objects like views and sql functions it raises an error.

CREATE OR REPLACE VIEW v AS SELECT * FROM non_existent_table;
-- ERROR:  relation "non_existent_table" does not exist
-- LINE 1: CREATE OR REPLACE VIEW v AS SELECT * FROM non_existent_table...

CREATE OR REPLACE FUNCTION f() RETURNS void AS 'DELETE FROM non_existent_table' LANGUAGE sql;
-- ERROR:  relation "non_existent_table" does not exist
-- LINE 1: ...REPLACE FUNCTION f() RETURNS void AS 'DELETE FROM non_existe...
--                                                              ^
SELECT relname FROM pg_class WHERE relname IN ('v', 'p');
-- relname
-- ---------
-- (0 rows)

For dynamically parsed objects like plpgsql functions, the function body is treated as a string literal that is parsed at runtime. This allows providing almost anything as the function body; the error will happen at runtime.

CREATE OR REPLACE FUNCTION f2() RETURNS void AS 'BEGIN DELETE FROM non_existent_table; END;' LANGUAGE plpgsql;
-- CREATE FUNCTION

SELECT * FROM f2();
-- ERROR:  relation "non_existent_table" does not exist
-- LINE 1: DELETE FROM non_existent_table
--                     ^

Because Postgres doesn't have an INVALID state, it is not possible to drop an object referenced by a view or procedure. One can use the CASCADE option to hierarchically drop all referencing objects.

CREATE TABLE t(id INT);
CREATE OR REPLACE VIEW v AS SELECT * FROM t;
DROP TABLE t;
-- ERROR:  cannot drop table t because other objects depend on it
-- DETAIL:  view v depends on table t
-- HINT:  Use DROP ... CASCADE to drop the dependent objects too.

DROP TABLE t CASCADE;
-- NOTICE:  drop cascades to view v
-- DROP TABLE

SELECT relname FROM pg_class WHERE relname = 'v';
-- relname
-- ---------
-- (0 rows)

Views

Oracle

Oracle allows dropping a column from a view, changing its datatype, or renaming it via the CREATE OR REPLACE command.

-- create table and view
SQL> create table t(id int, n varchar2(10));
Table created.
SQL> create view v as select * from t;
View created.

-- column drop is allowed
SQL> create or replace view v as select id from t;
View created.

-- column rename is allowed
SQL> create or replace view v as select * from t;
View created.
SQL> create or replace view v as select id, n as n2 from t;
View created.

-- column datatype change is allowed
SQL> create or replace view v as select * from t;
View created.
SQL> create or replace view v as select id, cast(n as integer) as n from t;
View created.

Postgres

Postgres doesn't allow dropping a column from a view, changing its datatype, or renaming it via the standard CREATE OR REPLACE approach. The only way is to drop the view and recreate it with the required structure.

-- create table and view
dev=> create table t(id int, n text);
CREATE TABLE
dev=> create view v as select * from t;
CREATE VIEW

-- column drop is forbidden
dev=> create or replace view v as select id from t;
ERROR:  cannot drop columns from view

-- column rename is forbidden
dev=> create or replace view v as select id, n as n2 from t;
ERROR:  cannot change name of view column "n" to "n2"

-- column datatype change is forbidden
dev=> create or replace view v as select id, n::int from t;
ERROR:  cannot change data type of view column "n" from text to integer
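
The workaround is to drop and recreate the view, optionally inside a transaction so other sessions never see it missing. A sketch based on the table above:

begin;
drop view v;
create view v as select id, n as n2 from t;  -- the rename is now possible
commit;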

Temporary Views

Oracle

Temporary views don't exist in Oracle.

Postgres

Postgres allows creating temporary views. They behave the same as ordinary views, with two major differences:

- After session end they are automatically dropped.
- Because they are truly temporary objects, they can't be created in a specific schema. A special in-memory temp schema is used for them.

Assume we have table t upon which we create temp view v.

dev=> create table t(id int, n text);
CREATE TABLE
dev=> \d t
                   Table "app.t"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 id     | integer |           |          |
 n      | text    |           |          |

dev=> create temp view v as select * from t;
CREATE VIEW
dev=> \d v
                View "pg_temp_4.v"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 id     | integer |           |          |
 n      | text    |           |          |

dev=> select * from v;
 id | n
----+---
(0 rows)

After reconnecting to the same database under the same user via the \c - - command, the temporary view no longer exists.

dev=> \c - -
You are now connected to database "dev" as user "app".
dev=> \d v
Did not find any relation named "v".
dev=> select * from v;
ERROR:  relation "v" does not exist
LINE 1: select * from v;
                      ^

A nice thing about temporary views is that when they share a name with an ordinary permanent table they take priority during object name resolution and can thus effectively hide the original object. This can be handy when mocking up data for testing.

Let's assume we have an ordinary table t in schema app.

dev=> create table t as select 1 id;
SELECT 1
dev=> \d t
                   Table "app.t"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 id     | integer |           |          |

dev=> select * from t;
 id
----
  1

After creating a temporary view with the same name t, selecting from that name starts to return data from the view instead. It can be seen that the view is created in a temporary schema pg_temp_3 which is kept entirely in memory. If we explicitly specify the schema name app, we get data from the original t table.

dev=> create temp view t as select 'Hi there!' as greeting;
CREATE VIEW
dev=> \d t
                View "pg_temp_3.t"
  Column  | Type | Collation | Nullable | Default
----------+------+-----------+----------+---------
 greeting | text |           |          |

dev=> select * from t;
 greeting
-----------
 Hi there!

dev=> select * from app.t;
 id
----
  1

Dropping the view makes table t the default resolution target again.

dev=> drop view t;
DROP VIEW
dev=> select * from t;
 id
----
  1

Synonym vs search_path

Oracle

The first step in object name resolution is a lookup in the current user's schema. If the object name is not prefixed, the object is either local to the current schema or it's a SYNONYM. Synonyms let code avoid hard dependencies on objects in other schemas/databases; instead, the dependency is on a local SYNONYM object which can be dynamically re-pointed to different objects.

Create a table under usr1 and grant SELECT on it to usr2

SQL> select user from dual;
-- USER
-- ----
-- USR1

SQL> create table t as select 1 id from dual;
Table created.

SQL> grant select on t to usr2;
Grant succeeded.

Connect as usr2 and create a SYNONYM for usr1.t. Having done that, we can reference the remote object as if it were a local one.

SQL> conn usr2/usr2@pdb
Connected.

SQL> create synonym t for usr1.t;
Synonym created.

SQL> select * from t;
--      ID
-- -------
--       1

At the same time, if a local object with the same name already exists, the SYNONYM can't be created

SQL> create table t as select 2 id from dual;
Table created.

SQL> create synonym t for usr1.t;
create synonym t for usr1.t
               *
ERROR at line 1:
ORA-00955: name is already used by an existing object

Postgres

Object name resolution is done via the search_path setting. It's just a comma-separated list of schema names which will be searched for the specified object name. There is also no SYNONYM entity as such. The default search_path value is "$user", public, where "$user" means the schema with the same name as the current user. The current user can be checked via the select current_user query. The public schema is the default schema when no explicit schema name was provided or specified in search_path.

Create table in default schema and in public one.

=> \dn
  List of schemas
  Name  |  Owner
--------+----------
 app    | app
 public | postgres

=> select current_user;
 current_user
--------------
 app

=> show search_path;
   search_path
-----------------
 "$user", public

=> create table t as select 1 id;
SELECT 1
=> create table public.t as select 2 id;
SELECT 1

=> select * from t;
 id
----
  1
=> select * from public.t;
 id
----
  2

Let's swap the elements of the search_path list with each other. The non-schema-prefixed name t now resolves to the public schema.

=> set search_path to public, "$user";
SET
=> select * from t;
 id
----
  2

It can be seen that, depending on the search_path setting, the non-schema-prefixed name t is resolved to tables in different schemas.

=> set search_path to "$user", public;
SET
=> \d t
                   Table "app.t"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 id     | integer |           |          |

=> set search_path to public, "$user";
SET
=> \d t
                 Table "public.t"
 Column |  Type   | Collation | Nullable | Default
--------+---------+-----------+----------+---------
 id     | integer |           |          |
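
The closest analogue to a persistent per-user resolution rule (something synonyms often provide in Oracle) is to store search_path at the role level, so every new session of that user picks it up. A sketch, assuming role app and an additional schema usr1 it has access to:

alter role app set search_path to "$user", public, usr1;
-- takes effect for new sessions of role "app"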

Types conversion

Oracle is very flexible with implicit type conversion. In contrast, Postgres is quite strict about it. It allows some automatic conversions as well, but not to the same extent as Oracle. This difference can lead to many runtime errors during code migration from Oracle to Postgres.

Oracle

It's possible to implicitly convert numeric data to character and vice versa.

select 1 + '2' as char2num, 10 || 'str' as num2char from dual;
-- CHAR2NUM   NUM2CHAR
-- 3          10str

You can also add a number to a date/timestamp and get a date as the result, or concatenate a date/timestamp with a string.

select 1 + sysdate as num2date, 'date: ' || sysdate as date2char from dual;
-- NUM2DATE    DATE2CHAR
-- 10-MAY-20   date: 09-MAY-20

select 1 + systimestamp as num2date, 'timestamp: ' || systimestamp date2char from dual;
-- NUM2DATE    DATE2CHAR
-- 10-MAY-20   timestamp: 09-MAY-20 06.45.54.187620 PM +00:00

The same rules apply to routine input parameters and return types

create or replace function f(p int) return int as begin return p + 1; end;

select f(1), f(1.0) from dual;
-- F(1) F(1.0)
-- 2    2

Postgres

It's also possible to implicitly convert between some heterogeneous datatypes. The symmetrical numeric-to-varchar conversion works the same as in Oracle.

select 1 + '2' as char2num, 10 || 'str' as num2char;
--  char2num | num2char
-- ----------+----------
--         3 | 10str

Date type conversion also works the same.

select 1 + date '2020-05-09' as num2date, 'date: ' || date '2020-05-09' as date2char;
--   num2date  |    date2char
-- ------------+------------------
--  2020-05-10 | date: 2020-05-09

But for the timestamp datatype this works only partially.

select 1 + current_timestamp as num2date;
-- ERROR:  operator does not exist: integer + timestamp with time zone
-- LINE 1: select 1 + current_timestamp as num2date;
--                  ^
-- HINT:  No operator matches the given name and argument types. You might need to add explicit type casts.

select 'timestamp: ' || current_timestamp as date2char;
--                  date2char
-- -------------------------------------------
--  timestamp: 2020-05-09 19:02:01.245362+00

Many more problems with datatypes come with routine signatures. In most cases Postgres requires an explicit match between the provided value's datatype and the declared function parameter's datatype. This is another big problem in Oracle 2 Postgres migration.

create or replace function f(p int) returns int as 'select p + 1' language sql;
CREATE FUNCTION

select f(1);
 f
---
 2

-- such call will work in Oracle
select f(1.0);
ERROR:  function f(numeric) does not exist
LINE 1: select f(1.0);
               ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
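
The usual fix when migrating such calls is to add an explicit cast at the call site (or to declare the parameter with a wider type such as numeric), for example:

select f(1.0::int);
 f
---
 2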

Another subtle, purely Postgres-specific detail is that sql functions are more flexible in datatype conversion than plpgsql ones.

create or replace function f() returns setof varchar as
   $$ select concat('asdf', 'qwer') $$ language sql;

create or replace function f2() returns setof varchar as
   $$ begin return query select concat('asdf', 'qwer'); end; $$ language plpgsql;

=> select * from f();
    f
----------
 asdfqwer

=> select * from f2();
ERROR:  structure of query does not match function result type
DETAIL:  Returned type text does not match expected type character varying in column 1.
CONTEXT:  PL/pgSQL function f2() line 1 at RETURN QUERY
