PostgreSQL user and role creation and privileges, with pgpool-II


Creating PostgreSQL users and roles and managing privileges:

We assume that Postgres Enterprise Manager (PEM) has been installed and configured for setting up new users and roles.

Select the Postgres Enterprise Manager node and right-click Login Roles to create new roles:

1. To modify the properties of an existing login role, right-click the role's name in the tree control and select Properties from the context menu. To delete a login role, right-click the role's name and select Delete/Drop from the context menu.

2. A role must be granted sufficient privileges before it can access, execute, or create any database object. PEM allows you to assign (GRANT) and remove (REVOKE) object permissions for group roles or login accounts through the graphical interface of the PEM client.
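The GUI actions above correspond to plain SQL statements; a minimal sketch (the sales table and developer role are illustrative names, not from this setup):

```sql
-- Grant read/write access on a table to a role, then take the write access back
GRANT SELECT, INSERT, UPDATE ON sales TO developer;
REVOKE INSERT, UPDATE ON sales FROM developer;
```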

Creating roles through psql:

1. Create a role that can log in, but don’t give it a password:

CREATE ROLE developer LOGIN;

2. Create a role with a password:

CREATE USER user1 WITH PASSWORD 'jw8s0F4';

(CREATE USER is the same as CREATE ROLE except that it implies LOGIN.)

3. Create a role with a password that is valid until the end of 2018. After one second has ticked in 2019, the password is no longer valid.

CREATE ROLE developer WITH LOGIN PASSWORD 'jw8s0F4' VALID UNTIL '2019-01-01';

4. Create a role that can create databases and manage roles:

CREATE ROLE admin WITH CREATEDB CREATEROLE;
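Roles without LOGIN can also act as groups. A sketch, assuming the developer login role from example 1 (the readonly role name is illustrative):

```sql
-- Create a group role, grant it read access, and add developer to the group
CREATE ROLE readonly NOLOGIN;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO readonly;
GRANT readonly TO developer;
```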

Configuring PostgreSQL streaming replication for pgpool-II:

1. From here on, we assume that the database clusters are located at /home/postgres/data and are owned by the postgres user:

$ initdb -D /home/postgres/data

2. Next, add the following to /home/postgres/data/postgresql.conf. The settings from "logging_collector" onward are not strictly related to streaming replication, but they make troubleshooting easier. You may want to remove "log_statement = 'all'" in a production environment, however.

listen_addresses = '*'
hot_standby = on
wal_level = hot_standby
max_wal_senders = 1
logging_collector = on
log_filename = '%A.log'
log_line_prefix = '%p %t '
log_truncate_on_rotation = on
log_statement = 'all'

3. Copy pg_hba.conf to /home/postgres/data, replacing /etc/postgresql with the actual directory where you downloaded the scripts. Caution: the settings here allow access from any IP address; apply appropriate restrictions for your real-world systems.

$ cp /etc/postgresql/pg_hba.conf /home/postgres/data
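If you do not have a prepared pg_hba.conf, entries along these lines match the setup described here (open to any address with trust authentication, which is what the caution above refers to; tighten both for production):

```
# TYPE  DATABASE     USER      ADDRESS      METHOD
host    all          all       0.0.0.0/0    trust
host    replication  postgres  0.0.0.0/0    trust
```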

4. Start the PostgreSQL server on server2 and server3. At this point both PostgreSQL servers run as primaries, so streaming replication is not yet active.

$ pg_ctl -D /home/postgres/data start

5. Next, allow the postgres user on server2 and server3 to access each other's host without a password. Execute ssh-keygen as postgres and append the contents of /home/postgres/.ssh/id_rsa.pub to /home/postgres/.ssh/authorized_keys on the other server. Afterwards, test the setup by running a command such as ls over ssh:

$ ssh server2 ls
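A typical key exchange, run as the postgres user on server2 (this sketch assumes ssh-copy-id is available; otherwise append the public key to authorized_keys by hand as described above):

```shell
$ ssh-keygen -t rsa           # accept the defaults and an empty passphrase
$ ssh-copy-id postgres@server3
$ ssh server3 ls              # should list files without prompting for a password
```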

Installing pgpool-II

1. Run the following commands on server1:

$ tar xfz /etc/postgresql/pgpool-II-3.3.3.tar.gz
$ cd pgpool-II-3.3.3
$ ./configure
$ make
$ sudo make install
$ cp /etc/postgresql/pgpool-II-3.3.3/install-functions.sh .
$ sh install-functions.sh

2. Next, install the pgpool-II configuration files on server1. The main configuration file is pgpool.conf; the other is pcp.conf. Execute the following as root:

$ cp /etc/postgresql/pgpool-II-3.3.3/pgpool.conf /usr/local/etc
$ chown apache /usr/local/etc/pgpool.conf
$ cp /etc/postgresql/pgpool-II-3.3.3/pcp.conf /usr/local/etc
$ chown apache /usr/local/etc/pcp.conf
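The pgpool.conf settings that matter most for streaming-replication mode look roughly like this (host names and paths follow the assumptions above; verify each parameter against the pgpool.conf shipped with your version):

```
listen_addresses = '*'
backend_hostname0 = 'server2'
backend_port0 = 5432
backend_weight0 = 1
backend_data_directory0 = '/home/postgres/data'
backend_hostname1 = 'server3'
backend_port1 = 5432
backend_weight1 = 1
backend_data_directory1 = '/home/postgres/data'
master_slave_mode = on
master_slave_sub_mode = 'stream'
sr_check_user = 'postgres'
```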

3. Install basebackup.sh and pgpool_remote_start, which are necessary for online recovery, on server2 and server3. Note that pgpool_remote_start hard-codes the path to the pg_ctl command; you might want to change it to the appropriate path for your PostgreSQL installation.

$ cp /etc/postgresql/pgpool-II-3.3.3/basebackup.sh /home/postgres/data
$ chmod 755 basebackup.sh
$ cp /etc/postgresql/pgpool-II-3.3.3/pgpool_remote_start /home/postgres/data
$ chmod 755 pgpool_remote_start
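For online recovery to invoke these scripts, pgpool.conf also needs entries along these lines (values are illustrative for this setup):

```
recovery_user = 'postgres'
recovery_password = ''
recovery_1st_stage_command = 'basebackup.sh'
```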

4. Install failover.sh, used for automatic failover, on server1:

$ sudo cp /etc/postgresql/pgpool-II-3.3.3/failover.sh /usr/local/etc
$ chmod 755 failover.sh
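If you need to write failover.sh yourself, a minimal sketch creates a trigger file on the surviving node so the standby promotes itself. The argument order, node numbering, and trigger-file mechanism are assumptions that must match your failover_command and recovery.conf settings:

```shell
#!/bin/sh
# failover.sh: invoked by pgpool-II, e.g. as
#   failover.sh %d %H /var/log/pgpool/trigger/trigger_file
failed_node=$1     # id of the node that went down
new_master=$2      # host name of the new primary candidate
trigger_file=$3    # trigger file path configured in the standby's recovery.conf

# Only act when the primary (assumed to be node 0) has failed;
# a failed standby is simply detached and needs no promotion.
if [ "$failed_node" = 0 ]; then
    ssh -T postgres@"$new_master" "touch $trigger_file"
fi
exit 0
```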

5. Create the necessary directories on server2 and server3. Execute the following as root:

$ mkdir /var/run/pgpool
$ chown apache /var/run/pgpool
$ mkdir /var/log/pgpool
$ chown apache /var/log/pgpool

$ mkdir /var/log/pgpool/trigger
$ chown postgres /var/log/pgpool/trigger

6. Append /var/www/.ssh/id_rsa.pub to /home/postgres/.ssh/authorized_keys on server2 and server3. Afterwards, test the setup as the apache user by running a command such as ls over ssh:

$ ssh postgres@server2 ls
$ ssh postgres@server3 ls

Installing pgpoolAdmin

Extract the pgpoolAdmin archive under the web server's document root and adjust ownership and permissions so the web server can use it:

$ cd /var/www/html/pgpoolAdmin-3.3.1
$ chmod 777 templates_c
$ chown apache conf/pgmgt.conf.php
$ chmod 644 conf/pgmgt.conf.php

1. Log in to pgpoolAdmin and start pgpool-II from the "pgpool status" menu. You will see that PostgreSQL on server2 port 5432 is running as the primary server, and you should be able to connect to server3 port 5432 using psql. Let's try creating a table via pgpool-II:

$ createdb -h server1 test
$ psql -h server1 test
test=# create table t1(i int);
CREATE TABLE
test=#

2. Starting the standby server. To start the standby server, just click the "Recovery" button; basebackup.sh will be executed and the standby server will be started automatically.

Insert a row into t1 via pgpool-II; the statement is executed on the primary server:

psql -h server1 test
test=# insert into t1 values(1);
test=# \q

psql -h server3 test
-- now connected to standby server
test=# select * from t1;
i
---
1
(1 row)

3. If the standby goes down, it is detached from pgpool-II. Users can still issue SQL through pgpool-II as usual; only streaming replication is stopped. To recover the standby server, click the "Recovery" button.
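Recovery can also be triggered from the command line with the pcp_recovery_node tool that ships with pgpool-II. In the 3.3 series its arguments are positional (timeout, pgpool host, pcp port, pcp user, pcp password, node id); the credentials and node id below are illustrative and must match your pcp.conf:

```shell
$ pcp_recovery_node 60 server1 9898 admin adminpassword 1
```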

Postgres Backup and Archive:

Backing Up the Database:

1. Create a backup file using pg_dump.

$ cd /opt/edb/as10/bin
$ ./pg_dump -U enterprisedb -Fp -f /tmp/edb.dmp edb
Password:
$

2. Connect to the database as a superuser and export the SQL/Protect data using the export_sqlprotect('sqlprotect_file') function, where sqlprotect_file is the fully qualified path to the file where the SQL/Protect data is to be saved:

edb=# SELECT sqlprotect.export_sqlprotect('/tmp/sqlprotect.dmp');
export_sqlprotect
-------------------

(1 row)

The files /tmp/edb.dmp and /tmp/sqlprotect.dmp comprise your total database backup.

3. Restoring from the backup files:

The following example uses the psql utility program to restore the plain-text backup file /tmp/edb.dmp to a newly created database named newdb:

$ /opt/edb/as10/bin/psql -d newdb -U enterprisedb -f /tmp/edb.dmp
Password for user enterprisedb:
SET
SET
SET
SET
SET
COMMENT
CREATE SCHEMA
.
.
.

4. Connect to the new database as a superuser and delete all rows from the edb_sql_protect_rel table. This step removes any rows in edb_sql_protect_rel that were backed up from the original database; those rows do not contain the correct OIDs relative to the database where the backup file has been restored.

$ /opt/edb/as10/bin/psql -d newdb -U enterprisedb
Password for user enterprisedb:
psql.bin (10.0.1)
Type "help" for help.

newdb=# DELETE FROM sqlprotect.edb_sql_protect_rel;
DELETE 2

Delete all rows from the edb_sql_protect table.

newdb=# DELETE FROM sqlprotect.edb_sql_protect;
DELETE 1

List any statistics that may exist for the database; each row will be removed in the next step.

newdb=# SELECT * FROM sqlprotect.edb_sql_protect_stats;
username | superusers | relations | commands | tautology | dml
--------------+--------------------+-----------+-------------------+--------------+-------
(0 rows)

5. For each row that appears in the preceding query, use the drop_stats function specifying the role name of the entry.

For example, if a row appeared with appuser in the username column, issue the following command to remove it:

newdb=# SELECT sqlprotect.drop_stats('appuser');
drop_stats
------------

(1 row)

6. List any offending queries that may exist for the database; each will be removed in the next step.

newdb=# SELECT * FROM sqlprotect.edb_sql_protect_queries;
username | ip_address | port | machine_name | date_time | query
-----------------+---------------+------+-------------------------+----------------+-------
(0 rows)

7. For each row that appears in the preceding query, use the drop_queries function specifying the role name of the entry.

For example, if a row appeared with appuser in the username column, issue the following command to remove it:

newdb=# SELECT sqlprotect.drop_queries('appuser');
drop_queries
--------------

(1 row)

8. Make sure the role names that were protected by SQL/Protect in the original database exist on the database server where the new database resides. Then import the SQL/Protect data using the import_sqlprotect('sqlprotect_file') function:

newdb=# SELECT sqlprotect.import_sqlprotect('/tmp/sqlprotect.dmp');
import_sqlprotect
-------------------
(1 row)

Verify that the imported rows now reference the correct OIDs in the new database:

newdb=# SELECT datname, oid FROM pg_database;
datname | oid 
-----------+-------
template1 | 1
template0 | 13909
edb | 13917
newdb | 16679
(4 rows)

newdb=# SELECT rolname, oid FROM pg_roles;
rolname | oid 
--------------+-------
enterprisedb | 10
appuser | 16671
newuser | 16678
(3 rows)

newdb=# SELECT relname, oid FROM pg_class WHERE relname IN ('dept','emp','appuser_tab');
relname | oid 
-------------+-------
appuser_tab | 16803
dept | 16809
emp | 16812
(3 rows)

newdb=# SELECT * FROM sqlprotect.edb_sql_protect;
dbid | roleid | protect_relations | allow_utility_cmds | allow_tautology | allow_empty_dml
-------+--------+-------------------+--------------------+-----------------+-----------------
16679 | 16671 | t | t | f | f
(1 row)

newdb=# SELECT * FROM sqlprotect.edb_sql_protect_rel;
dbid | roleid | relid
-------+--------+-------
16679 | 16671 | 16809
16679 | 16671 | 16803
(2 rows)

newdb=# SELECT * FROM sqlprotect.edb_sql_protect_stats;
username | superusers | relations | commands | tautology | dml
----------+------------+-----------+----------+-----------+-----
appuser | 0 | 5 | 2 | 1 | 0
(1 row)

newdb=# \x
Expanded display is on.
newdb=# SELECT * FROM sqlprotect.edb_sql_protect_queries;
-[ RECORD 1 ]+---------------------------------------------
username | appuser 
ip_address | 
port | 
machine_name | 
date_time | 20-JUN-14 13:21:00 -04:00 
query | CREATE TABLE appuser_tab_2 (f1 INTEGER); 
-[ RECORD 2 ]+---------------------------------------------
username | appuser 
ip_address | 
port | 
machine_name | 
date_time | 20-JUN-14 13:22:00 -04:00 
query | INSERT INTO appuser_tab_2 VALUES (2); 
-[ RECORD 3 ]+---------------------------------------------
username | appuser 
ip_address | 192.168.2.6 
port | 50098 
machine_name | 
date_time | 20-JUN-14 13:39:00 -04:00 
query | CREATE TABLE appuser_tab_3 (f1 INTEGER); 
-[ RECORD 4 ]+---------------------------------------------
username | appuser 
ip_address | 192.168.2.6 
port | 50098 
machine_name | 
date_time | 20-JUN-14 13:39:00 -04:00 
query | INSERT INTO appuser_tab_2 VALUES (1); 
-[ RECORD 5 ]+---------------------------------------------
username | appuser 
ip_address | 192.168.2.6 
port | 50098 
machine_name | 
date_time | 20-JUN-14 13:39:00 -04:00 
query | SELECT * FROM appuser_tab_2 WHERE 'x' = 'x';

Verify that the SQL/Protect configuration parameters are set as desired in the postgresql.conf file for the database server running the new database. Restart the database server or reload the configuration file as appropriate.
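The relevant postgresql.conf entries look roughly like this (parameter names should be checked against the EDB SQL/Protect documentation for your server version; the limit values are illustrative):

```
shared_preload_libraries = '$libdir/sqlprotect'
edb_sql_protect.enabled = on
edb_sql_protect.level = active
edb_sql_protect.max_protected_roles = 64
edb_sql_protect.max_protected_relations = 1024
```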
