
All things Database and VMware

Yvonne Murphy: Oracle Certified Master, VMware Certified Professional, and member of the Office of the Chief Technical Ambassador, VMware.


My role in VMware is to support customers running any Oracle product in a VM. But in all honesty, Oracle does a great job of this, so that doesn't keep me busy.

There are plenty of opportunities for a DBA to make herself useful in GSS, though. I'm like a ninja: I go into those black boxes and do some tricks in the database. Most of the posts here are the scripts I use.

Do remember that a bunny dies every time you make a change on one of these databases, so back up first!

Recreate the Postgres instance in VCSA

Dump out the VCDB database:

/opt/vmware/vpostgres/current/bin/pg_dump -d VCDB -U postgres -Fp -c > /storage/updatemgr/VCDBackUp
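Before dropping anything, it's worth sanity-checking that the dump completed. A quick look at the size and the tail of the file will do; a complete plain-format dump ends with a "dump complete" comment rather than mid-COPY data:

ls -lh /storage/updatemgr/VCDBackUp
tail -3 /storage/updatemgr/VCDBackUp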

Drop the database.

/opt/vmware/vpostgres/current/bin/dropdb -U postgres VCDB

Dump out the instance:
su -m -c "/opt/vmware/vpostgres/current/bin/pg_dumpall -c -f /storage/updatemgr/GSS_dumpall.sql" postgres

Back up the data directory (it's a directory, so copy recursively and preserve ownership and permissions):
cp -a /storage/db/vpostgres /storage/db/vpostgres_bk
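A quick size comparison confirms the copy went through (the two should match):

du -sh /storage/db/vpostgres /storage/db/vpostgres_bk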

Remove all dependencies, including the SEAT (stats, events, alarms and tasks) data:

cd /storage/db/vpostgres
rm -rf *

rm /storage/seat/vpostgres/tasktblsp/*
rm /storage/seat/vpostgres/alarmtblsp/*
rm /storage/seat/vpostgres/hs2tblsp/*
rm /storage/seat/vpostgres/hs3tblsp/*
rm /storage/seat/vpostgres/hs1tblsp/*
rm /storage/seat/vpostgres/hs4tblsp/*
rm /storage/seat/vpostgres/eventtblsp/*
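If you prefer, the same SEAT cleanup can be done as a loop. This assumes all the tablespace directories follow the *tblsp naming shown above:

for t in /storage/seat/vpostgres/*tblsp; do rm -rf "$t"/*; done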

Create and start new instance

su - vpostgres -c "/opt/vmware/vpostgres/current/bin/initdb -D /storage/db/vpostgres"
su - vpostgres -c "/opt/vmware/vpostgres/current/bin/pg_ctl -D /storage/db/vpostgres start"
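You can confirm the new instance is accepting connections with pg_isready (shipped with Postgres 9.3 and later; this assumes the default local socket and port):

/opt/vmware/vpostgres/current/bin/pg_isready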

Create postgres user
/opt/vmware/vpostgres/current/bin/createuser -s postgres

Import the instance (note this is psql replaying the dumpall file we created earlier, not pg_dumpall again):
/opt/vmware/vpostgres/current/bin/psql -U postgres -f /storage/updatemgr/GSS_dumpall.sql
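Before recreating VCDB, it's worth confirming that the roles (including vc) came back with the instance import:

/opt/vmware/vpostgres/current/bin/psql -U postgres -c "\du"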

Create VCDB (the vc user was already restored with the instance import):
/opt/vmware/vpostgres/current/bin/createdb -U postgres VCDB
Connect to the database:
/opt/vmware/vpostgres/current/bin/psql -U postgres VCDB
psql.bin (9.4.12 (VMware Postgres 9.4.12.0-5574246 release))
Type "help" for help.

Run the following queries:
create schema vc;
ALTER USER vc SET search_path TO vc,pg_catalog;
ALTER USER postgres SET search_path TO vc,pg_catalog;
ALTER USER vc WITH PASSWORD 'd#Bu|AWL+x&{Y3j5'; -- get this from /etc/vmware-vpx/vcdb.properties

ALTER DATABASE “VCDB” OWNER TO vc;

VCDB=> \q

We will now restore the previously backed-up database:

/opt/vmware/vpostgres/current/bin/psql -d VCDB -U postgres -f /storage/updatemgr/VCDBackUp >/tmp/output_backup.txt 2>&1
Wait until it finishes, then check the log at /tmp/output_backup.txt.
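A quick grep of that log shows whether anything failed during the restore (ideally this count is zero):

grep -c ERROR /tmp/output_backup.txt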

Once it completes, test by querying a table:

/opt/vmware/vpostgres/current/bin/psql -d VCDB -U vc
select * from vpx_entity; -- confirm that rows are returned

VCDB=> \q

Restore a backup of VCDB

/opt/vmware/vpostgres/current/bin/dropdb -U postgres VCDB

/opt/vmware/vpostgres/current/bin/createdb -U postgres VCDB

Connect to the database:
/opt/vmware/vpostgres/current/bin/psql -U postgres VCDB
psql.bin (9.4.12 (VMware Postgres 9.4.12.0-5574246 release))
Type "help" for help.

Run the following queries:
create schema vc;
ALTER USER vc SET search_path TO vc,pg_catalog;
ALTER USER postgres SET search_path TO vc,pg_catalog;
ALTER USER vc WITH PASSWORD 'd#Bu|AWL+x&{Y3j5'; -- get this from /etc/vmware-vpx/vcdb.properties

ALTER DATABASE “VCDB” OWNER TO vc;

VCDB=> \q

We will now restore the previously backed-up database:

/opt/vmware/vpostgres/current/bin/psql -d VCDB -U postgres -f /storage/core/VCDBackUp >/tmp/output_backup.txt 2>&1
Wait until it finishes, then check the log at /tmp/output_backup.txt.
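As in the previous entry, a quick grep of the log shows whether the restore hit any errors:

grep -c ERROR /tmp/output_backup.txt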

Once it completes, test by querying a table:

/opt/vmware/vpostgres/current/bin/psql -d VCDB -U vc
select * from vpx_entity; -- confirm that rows are returned

VCDB=> \q

Nightly Vacuum analyze

You'll have to create a file on your appliance and run it.

The application (and its database) needs to be up for this script to work.

It will cause extra I/O but no locking. If you're using synchronization, please do this off peak.

Copy the script below into vi, make it executable, and run it:

vi vacuum_nightly.sh

chmod u+x vacuum_nightly.sh

./vacuum_nightly.sh

Let me know if there are any issues:

ymurphy@vmware.com

After running this manually a few times, I'd suggest running it nightly from cron (adjust the path below to wherever you saved the script):

crontab -e

20 2 * * * /tmp/vacuum_nightly.sh

Output from the script can be seen in stats_collector.log in the same directory the script runs from.

#!/bin/bash
# Nightly vacuum/analyze wrapper for VMware appliance Postgres databases.
error=0
errorlog='stats_collector.log'
# Derive the Postgres bin directory from the running postmaster's command line.
pbin=`ps -ea -o cmd | grep '[p]ostgres -D' | awk '{ gsub ("postgres$","",$1); print $1}'`
# OS user that owns the Postgres backend processes.
dbownername=`ps -eo uname:20,pid,pcpu,pmem,sz,tty,stat,time,cmd --sort -time | grep -m 1 'postgres:' | awk '{print $1}'`
# Database name taken from the first backend process found.
databasename=`ps -f -N --sort -pid | grep -m 1 'postgres:' | awk '{print $10}'`

function vacuumaz {

echo "$dbownername" database owner
echo "$databasename" database name

# Vacuum and analyze (-z) the detected appliance database, connecting as
# the user appropriate for that appliance type.
if [ "$databasename" = "secureall" ]; then
 $pbin/vacuumdb -z -U secureall -d secureall 2>>$errorlog
elif [ "$databasename" = "vcac" ]; then
 $pbin/vacuumdb -z -U postgres -d vcac 2>>$errorlog
elif [ "$databasename" = "VCDB" ]; then
 $pbin/vacuumdb -z -U postgres -d VCDB > /dev/null 2>>$errorlog
# Fallback: vacuum whatever database was detected. The vacuumdb command
# itself is the elif condition, so its exit status picks the branch below.
elif
 $pbin/vacuumdb -z -U postgres -d "$databasename" > /dev/null 2>>$errorlog ; then
 printf '\n No Errors on vacuumdb \n'
 printf '\n'
 else
 echo "vacuumdb ERROR. Please review $errorlog file"
 printf '\n'
 error=1
 fi

echo `/bin/date "+%Y-%m-%dT%H:%M:%S"` Finish vacuumdb
echo `/bin/date "+%Y-%m-%dT%H:%M:%S"` Finish vacuumdb >> $errorlog
}

# Main Program

vacuumaz

# Propagate any failure to cron via the exit status.
exit $error
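To confirm the vacuum actually ran, check the log and, assuming a vCenter appliance with VCDB, the last_vacuum timestamps in pg_stat_user_tables:

tail -n 5 stats_collector.log
/opt/vmware/vpostgres/current/bin/psql -U postgres -d VCDB -c "select relname, last_vacuum, last_analyze from pg_stat_user_tables order by last_vacuum desc nulls last limit 10;"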


Is my postgres database bloated?

I personally believe that autovacuum isn't aggressive enough out of the box on any of our appliances. This causes bloat and slower response times.

Bloat can be seen using the query below.

If tbloat or ibloat is greater than 1.2 (i.e. more than 20% bloat), autovacuum isn't doing its job and you need to implement a nightly vacuum. See the nightly vacuum entry above.

SELECT
current_database(), schemaname, tablename, /*reltuples::bigint, relpages::bigint, otta,*/
ROUND((CASE WHEN otta=0 THEN 0.0 ELSE sml.relpages::FLOAT/otta END)::NUMERIC,1) AS tbloat,
CASE WHEN relpages < otta THEN 0 ELSE bs*(sml.relpages-otta)::BIGINT END AS wastedbytes,
iname, /*ituples::bigint, ipages::bigint, iotta,*/
ROUND((CASE WHEN iotta=0 OR ipages=0 THEN 0.0 ELSE ipages::FLOAT/iotta END)::NUMERIC,1) AS ibloat,
CASE WHEN ipages < iotta THEN 0 ELSE bs*(ipages-iotta) END AS wastedibytes
FROM (
SELECT
schemaname, tablename, cc.reltuples, cc.relpages, bs,
CEIL((cc.reltuples*((datahdr+ma-
(CASE WHEN datahdr%ma=0 THEN ma ELSE datahdr%ma END))+nullhdr2+4))/(bs-20::FLOAT)) AS otta,
COALESCE(c2.relname,'?') AS iname, COALESCE(c2.reltuples,0) AS ituples, COALESCE(c2.relpages,0) AS ipages,
COALESCE(CEIL((c2.reltuples*(datahdr-12))/(bs-20::FLOAT)),0) AS iotta -- very rough approximation, assumes all cols
FROM (
SELECT
ma,bs,schemaname,tablename,
(datawidth+(hdr+ma-(CASE WHEN hdr%ma=0 THEN ma ELSE hdr%ma END)))::NUMERIC AS datahdr,
(maxfracsum*(nullhdr+ma-(CASE WHEN nullhdr%ma=0 THEN ma ELSE nullhdr%ma END))) AS nullhdr2
FROM (
SELECT
schemaname, tablename, hdr, ma, bs,
SUM((1-null_frac)*avg_width) AS datawidth,
MAX(null_frac) AS maxfracsum,
hdr+(
SELECT 1+COUNT(*)/8
FROM pg_stats s2
WHERE null_frac<>0 AND s2.schemaname = s.schemaname AND s2.tablename = s.tablename
) AS nullhdr
FROM pg_stats s, (
SELECT
(SELECT current_setting('block_size')::NUMERIC) AS bs,
CASE WHEN SUBSTRING(v,12,3) IN ('8.0','8.1','8.2') THEN 27 ELSE 23 END AS hdr,
CASE WHEN v ~ 'mingw32' THEN 8 ELSE 4 END AS ma
FROM (SELECT version() AS v) AS foo
) AS constants
GROUP BY 1,2,3,4,5
) AS foo
) AS rs
JOIN pg_class cc ON cc.relname = rs.tablename
JOIN pg_namespace nn ON cc.relnamespace = nn.oid AND nn.nspname = rs.schemaname AND nn.nspname <> 'information_schema'
LEFT JOIN pg_index i ON indrelid = cc.oid
LEFT JOIN pg_class c2 ON c2.oid = i.indexrelid
) AS sml
ORDER BY wastedbytes DESC;
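To run it, save the query to a file (the name bloat_check.sql here is just my choice) and feed it to psql:

/opt/vmware/vpostgres/current/bin/psql -U postgres -d VCDB -f bloat_check.sql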


Removing duplicate rows for all databases

Day to day we have to delete duplicate data from databases.

An upgrade may require a primary key where one didn't exist before, or a DB let its guard down and didn't enforce the PKs (yeah, looking at you Postgres!).

It's also useful when an upgrade can't create a new PK. The SQL Server commands to find duplicates are in there too.
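As a taste of the Postgres side, here's a minimal sketch using the hidden ctid column. The names my_table, col1 and col2 are hypothetical, so substitute your own table and its would-be key columns:

-- keep the row with the lowest ctid in each group of duplicates
DELETE FROM my_table a
USING my_table b
WHERE a.ctid > b.ctid
AND a.col1 = b.col1
AND a.col2 = b.col2;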

Continue reading “Removing duplicate rows for all databases”