Postgres foreign data wrapper for MongoDB database
This is part 4 of the #fdw foreign data wrappers series of posts.
Earlier we introduced wrappers for the MySQL, Oracle, and SQLite databases.
In this post we will connect to a MongoDB database from within Postgres.
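As a teaser, here is a minimal sketch of what such a setup typically looks like with the mongo_fdw extension; the server address, credentials, and column mappings below are illustrative assumptions, not a definitive configuration:

```sql
CREATE EXTENSION mongo_fdw;

-- Illustrative connection details; adjust to your MongoDB instance
CREATE SERVER mongo_srv FOREIGN DATA WRAPPER mongo_fdw
  OPTIONS (address '127.0.0.1', port '27017');

CREATE USER MAPPING FOR postgres SERVER mongo_srv
  OPTIONS (username 'mongo_user', password 'secret');

-- Map a MongoDB collection to a foreign table (hypothetical schema)
CREATE FOREIGN TABLE emp (
  _id   name,
  ename text,
  sal   numeric
) SERVER mongo_srv OPTIONS (database 'testdb', collection 'emp');

-- Query the collection as if it were an ordinary table
SELECT ename, sal FROM emp WHERE sal > 1000;
```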
Microsoft has taken one more step toward bringing different technologies and methodologies for working with data under one roof. The addition of JSON processing made it possible to work with data in the NoSQL document DB style. Adding support for the graph data model will make it possible to work in the NoSQL graph DB style. Of course, under the hood it is still the same relational database engine, with the new functionality bolted on top.
As a long-term fan of the SQLite database, I have always been a little disoriented by its documentation structure.
To me it seems rather cluttered and not easy to follow.
Apart from the documentation structure, there is another thing that is out of line with what we expect from a primary key constraint definition in other RDBMSs.
If you have ever wondered what the difference is between INT PRIMARY KEY and INTEGER PRIMARY KEY, and why AUTOINCREMENT is not needed for default primary key generation in SQLite, then this post might be interesting for you.
It attempts to describe these things in a hopefully easier-to-follow format.
Note: I am not saying the official documentation is bad; it does describe all of these things.
I just want to filter out the primary-key-related information and present it in a more consistent manner, accompanied by some examples.
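To give a taste of the quirk in question, here is a small self-contained sketch using Python's built-in sqlite3 module (table names are illustrative): INTEGER PRIMARY KEY makes the column an alias for the rowid, while INT PRIMARY KEY does not.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# INTEGER PRIMARY KEY: the column becomes an alias for the rowid,
# so omitting it (or inserting NULL) auto-generates the next id.
cur.execute("CREATE TABLE t1 (id INTEGER PRIMARY KEY, val TEXT)")
cur.execute("INSERT INTO t1 (val) VALUES ('a')")
cur.execute("INSERT INTO t1 (id, val) VALUES (NULL, 'b')")
print(cur.execute("SELECT id, val FROM t1").fetchall())
# -> [(1, 'a'), (2, 'b')]

# INT PRIMARY KEY: NOT a rowid alias; SQLite even allows NULL here
# due to a long-standing backward-compatibility quirk.
cur.execute("CREATE TABLE t2 (id INT PRIMARY KEY, val TEXT)")
cur.execute("INSERT INTO t2 (val) VALUES ('a')")
print(cur.execute("SELECT id, val FROM t2").fetchall())
# -> [(None, 'a')]
```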
This is part 3 of the #fdw foreign data wrappers series of posts.
The initial part 1 was about connectivity to a MySQL database.
The next part 2 was related to accessing an Oracle database.
In this post we will access a SQLite database from within Postgres.
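A minimal sketch of what such a setup typically looks like with the sqlite_fdw extension; the file path and table layout below are illustrative assumptions:

```sql
CREATE EXTENSION sqlite_fdw;

-- Path to the SQLite database file is an illustrative assumption
CREATE SERVER sqlite_srv FOREIGN DATA WRAPPER sqlite_fdw
  OPTIONS (database '/var/data/app.db');

-- No user mapping is needed: SQLite is just a local file
CREATE FOREIGN TABLE books (
  id    integer,
  title text
) SERVER sqlite_srv OPTIONS (table 'books');

SELECT * FROM books;
```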
As a contractor for the Epam company, I am currently working on an Oracle-to-Postgres migration project into the AWS cloud.
Being in the tech team lead position, I have been asked to present our work as a speaker at an official Epam webinar.
I am thankful to Epam for the opportunity to share our experience with a wide audience.
This is part 2 of the #fdw foreign data wrappers series of posts.
Part 1 described what foreign data means in Postgres and provided an example of connecting to a MySQL database.
In this post we will deal with accessing data in an Oracle database.
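A minimal sketch of what such a setup typically looks like with the oracle_fdw extension; the connection string, credentials, and table layout below are illustrative assumptions:

```sql
CREATE EXTENSION oracle_fdw;

-- Connection details are illustrative; adjust to your Oracle instance
CREATE SERVER oradb FOREIGN DATA WRAPPER oracle_fdw
  OPTIONS (dbserver '//oracle-host:1521/ORCLPDB1');

CREATE USER MAPPING FOR postgres SERVER oradb
  OPTIONS (user 'scott', password 'tiger');

-- Oracle identifiers are case-sensitive here, hence the upper case
CREATE FOREIGN TABLE emp (
  empno integer,
  ename text
) SERVER oradb OPTIONS (schema 'SCOTT', table 'EMP');

SELECT * FROM emp;
```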
This is the first part of the #fdw foreign data wrappers series of posts.
One of the great features of Postgres is the ability to access data stored in external sources as if it were stored in ordinary tables within the database.
There are almost no restrictions on the source itself. For example, it can be another RDBMS such as Oracle or MySQL.
But we are not limited to relational databases only. It can also be just a CSV file or even Hadoop file system objects.
All you need to do to get access to this data is to install the corresponding foreign data wrapper extension.
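Since this part covers MySQL, here is a minimal sketch of the usual mysql_fdw setup; the host, credentials, and table layout below are illustrative assumptions:

```sql
CREATE EXTENSION mysql_fdw;

-- Illustrative connection details; adjust to your MySQL instance
CREATE SERVER mysql_srv FOREIGN DATA WRAPPER mysql_fdw
  OPTIONS (host '127.0.0.1', port '3306');

CREATE USER MAPPING FOR postgres SERVER mysql_srv
  OPTIONS (username 'app', password 'secret');

-- Map a MySQL table to a foreign table (hypothetical schema)
CREATE FOREIGN TABLE customers (
  id   integer,
  name text
) SERVER mysql_srv OPTIONS (dbname 'shop', table_name 'customers');

SELECT * FROM customers;
```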
Postgres allows automatic type conversion to some extent.
It supports fully automatic implicit conversion, a more constrained implicit conversion applied only in assignment contexts, and explicit conversion specified via the CAST function or the :: operator.
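The three conversion contexts can be sketched as follows (the demo table is hypothetical):

```sql
-- Implicit: integer is silently promoted to numeric in expressions
SELECT 1 + 2.5;                      -- 3.5

-- Assignment-only: an integer can be stored into a varchar column,
-- but that cast is not applied in arbitrary expression contexts
CREATE TABLE demo (v varchar(10));
INSERT INTO demo VALUES (42);        -- OK: assignment cast int -> varchar

-- Explicit: CAST(...) and the :: operator are equivalent
SELECT CAST('123' AS integer), '123'::integer;
```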
A common approach for running time-consuming or maintenance tasks in databases is to use job processes.
Two classic extensions that allow launching scheduled jobs in Postgres are pgAgent and pg_cron.
The problem is that if your database is hosted in a cloud, they might be unavailable. This is the case for Postgres RDS in the AWS cloud.
One of the unofficial recommendations from the AWS team on the support forum was to launch an EC2 instance and, via crontab on that server, trigger psql to execute a script or call an API on the database side.
Such an approach requires you to keep (and pay for) a separate EC2 instance, which has to be continuously up and running.
Another piece of advice suggested an even more complex setup with Lambda functions.
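For comparison, on installations where pg_cron is available, scheduling a job is a one-liner; the job name and command below are illustrative:

```sql
CREATE EXTENSION pg_cron;

-- Run VACUUM ANALYZE every night at 03:00 (job name is arbitrary)
SELECT cron.schedule('nightly-vacuum', '0 3 * * *', 'VACUUM ANALYZE');

-- Inspect and remove scheduled jobs
SELECT jobid, jobname, schedule FROM cron.job;
SELECT cron.unschedule('nightly-vacuum');
```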
One of the problems that has to be resolved during an Oracle-to-Postgres code migration is the handling of temporary tables.
The biggest difference is that in Oracle temporary tables are persistent schema-level objects.
You define the table structure once, and it exists until you explicitly drop it.
The only difference compared to an ordinary table is that data in a temporary table is kept either for the duration of the current transaction only or until the end of the current session.
In contrast, in Postgres temp tables are purely temporary objects and get automatically dropped at the end of the current session.
They also support data lifetimes at the transaction or session level, but no temp table is available at the start of a new session.
In PL/SQL code you might have any SELECT or DML statement issued against a temp table.
It will work because the table is present as a schema object.
In migrated PL/pgSQL code, such temp table references will fail with a relation "table_name" does not exist error.
We have to somehow create the table before executing any statement against it.
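One common workaround is to create the temp table on demand at the start of the code that uses it. A minimal sketch (function and table names are hypothetical):

```sql
CREATE OR REPLACE FUNCTION calc_report() RETURNS bigint AS $$
BEGIN
  -- Create the temp table on first use instead of relying on a
  -- schema-level definition as in Oracle; IF NOT EXISTS makes the
  -- call safe when the table already exists in this session.
  CREATE TEMP TABLE IF NOT EXISTS tmp_emp (
    empno integer,
    ename text
  ) ON COMMIT DELETE ROWS;  -- mimics Oracle's transaction-scoped rows

  INSERT INTO tmp_emp VALUES (1, 'Scott'), (2, 'Adams');
  RETURN (SELECT count(*) FROM tmp_emp);
END;
$$ LANGUAGE plpgsql;
```

Note that ON COMMIT DELETE ROWS approximates Oracle's default transaction-scoped behavior, while omitting it keeps rows for the whole session.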