linux – pg_restore: error: could not read from input file (DBeaver for PostgreSQL in Oracle Virtual Machine), for .tar file

Using DBeaver 21.0.3 for PostgreSQL 13.2, in an Oracle VirtualBox virtual machine running Oracle Linux 7.9, on macOS Catalina (pgAdmin is not installed):

Trying to restore dvdrental.tar with the following Restore Settings:

Format: custom
Backup File: /home/oracle/Downloads/dvdrental.tar
local client: /usr/bin

Getting error:

pg_restore: error: could not read from input file
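Not a fix, but a quick way to narrow the problem down is to check the archive from a shell inside the VM before involving DBeaver; pg_restore -l only lists the archive's table of contents and fails with a similar message if the file itself is unreadable (the path below is the one from the settings above):

# confirm the file is present and is not truncated or an HTML error page from a failed download
file /home/oracle/Downloads/dvdrental.tar
# list the archive's table of contents; pg_restore detects tar vs. custom format automatically
pg_restore -l /home/oracle/Downloads/dvdrental.tar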

restore – pg_restore using PostGIS/PostgreSQL Dump Changes Data in Geography Column

Two PostGIS databases (PostGIS 3.0, PostgreSQL 13.1) were set up on two separate machines, using Docker images, to be as close to each other as possible.

A dump of the database was taken from the first machine using

pg_dump --host=db1.foo.com --dbname=foo --username=postgres -Fc --file=/tmp/foo.dump

and then restored on the database on the second machine using

pg_restore --clean --dbname=foo /tmp/foo.dump

When we view a query result in the GUI tool TablePlus, we notice that the column named coordinates, of type Geography, contains values that look very different after restoring.

Query Result on 1st Machine:

SELECT coordinates FROM locations LIMIT 5;

[screenshot of query result]

Query Result on 2nd Machine (after pg_restore):

SELECT coordinates FROM locations LIMIT 5;

[screenshot of query result]

However, our app that queries this database for coordinate data appears to be plotting the data correctly on a map.

Question: Why did the Geography values in the coordinates column change, and how can we restore from the dump while keeping the original data values?


Update: I tried using -b when performing pg_dump, but the problem persists.

pg_dump --host=db1.foo.com --dbname=foo --username=postgres -Fc -b --file=/tmp/foo.dump
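One way to tell whether the stored geographies actually differ, rather than just the hex display shown by TablePlus, is to compare a canonical text form on both machines. A minimal sketch, assuming locations has an id column to order by (that column name is an assumption, not taken from the question):

# on the first machine (or anywhere that can reach it):
psql -h db1.foo.com -U postgres -d foo -c "SELECT id, ST_AsText(coordinates) FROM locations ORDER BY id LIMIT 5;"
# on the second machine, against the restored database:
psql -U postgres -d foo -c "SELECT id, ST_AsText(coordinates) FROM locations ORDER BY id LIMIT 5;"

If the text output matches, the data survived the dump/restore intact and only the on-screen representation differs.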

postgresql – Does pg_restore continue after the internet connection is lost?

I know a query keeps running on the server until it returns data, even when the connection to the client is lost. But what about pg_restore? Does it upload the dump file and then perform the restore on the server, or does it restore step by step, requiring the client to stay connected?

I want to know this because my employee uses a VPN that disconnects automatically after some time, and the restore takes longer than the timeout XD.

When the connection was lost, pg_restore reported the following error:

pg_restore: error: error returned by PQputCopyData: server closed the connection unexpectedly
        This probably means the server terminated abnormally before or while processing the request.

However, on the server I can see the process is still up and running, using SELECT * FROM pg_stat_activity;
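For reference, a slightly narrower check than SELECT * — the columns are standard pg_stat_activity columns, but the host and database name below are placeholders:

psql -h my-server -U postgres -c "SELECT pid, application_name, state, query FROM pg_stat_activity WHERE datname = 'mydb';"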

postgresql – pg_restore --clean not working because of cascade drop

I’m working with a copy (DB-B) of a database (DB-A) that I keep up to date by running daily:

pg_restore -U postgres -d someDB --clean -j 2 -L WhatToImportList admin.dump

But I started noticing duplicate records in the DB-B tables. Upon further investigation, it seems --clean is not dropping the tables in DB-B because doing so would require a cascade drop on other tables and views that exist in my DB-B but NOT in the origin DB-A.

  1. Is it possible to force the import of data WITHOUT doing a cascade drop? I want to keep all my custom tables, views and functions!
  2. If not, what would be a good duplication strategy where I import the data from DB-A to DB-B but keep all my functions, views and tables I need for my analysis in DB-B?

Thanks.

Edit: a possible workaround is truncating each table and then importing it… but I’d have to list EACH TABLE in the script.
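A minimal sketch of that truncate-then-load approach, generating the TRUNCATE statements instead of typing each table by hand. The public schema filter is an assumption and would need to be narrowed to exactly the tables on the import list, otherwise the custom DB-B tables would be emptied too:

# generate a TRUNCATE for every table in the chosen schema and run it,
# then load data only from the dump, skipping the DROP/CREATE steps entirely
psql -U postgres -d someDB -At -c "SELECT format('TRUNCATE TABLE %I.%I CASCADE;', schemaname, tablename) FROM pg_tables WHERE schemaname = 'public';" | psql -U postgres -d someDB
pg_restore -U postgres -d someDB --data-only -j 2 -L WhatToImportList admin.dump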

postgresql – Is it safe to pg_dump and pg_restore a new postgres database that has malware?

I’m pretty sure there is a crypto bot eating up my CPU through a Postgres script. I would like to create an entirely new VM and move my database to it using pg_dump and pg_restore. I already checked my Postgres for new users, tables, and databases; I couldn’t find anything odd there that could compromise me if I move my data. I’m a little worried, however, because the bot is somehow getting access to my Postgres, and nothing else on my VM.
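For what it's worth, the kind of check described above can be done from psql; this is only a rough sketch of looking for unexpected roles and functions, not a malware scan (connection options omitted):

# list all roles and whether they are superusers or can log in
psql -U postgres -c "SELECT rolname, rolsuper, rolcanlogin FROM pg_roles;"
# list functions defined outside the built-in schemas
psql -U postgres -c "SELECT n.nspname, p.proname FROM pg_proc p JOIN pg_namespace n ON n.oid = p.pronamespace WHERE n.nspname NOT IN ('pg_catalog', 'information_schema');"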

Thank you for the help.

postgresql – pg_restore showing errors when specifying schema when backing up with pg_dump

I have created two different dump files: one without specifying the schema, another specifying the public schema.

Without specifying the public schema:

pg_dump -h IP_ADDRESS -p 5432 -U my_user -Fc my_db  > my_db_allschema.dump

And the pg_dump statement when specifying the public schema:

pg_dump -h IP_ADDRESS -p 5432 -U my_user -Fc my_db -n public > my_db_publicschema.dump

When using pg_restore to restore the dump files, I get errors with the dump file that was generated when specifying the public schema.

postgres@debian:~$ pg_restore -h localhost -p 5432  -U my_user -d my_db my_db_publicschemaonly.dump
pg_restore: while PROCESSING TOC:
pg_restore: from TOC entry 8; 2615 2200 SCHEMA public postgres
pg_restore: error: could not execute query: ERROR:  schema "public" already exists
Command was: CREATE SCHEMA public;


pg_restore: from TOC entry 212; 1259 18102 TABLE abandoned_url real_estate
pg_restore: error: could not execute query: ERROR:  function public.gen_random_uuid() does not exist
LINE 2:     id uuid DEFAULT public.gen_random_uuid() NOT NULL
                            ^
HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
Command was: CREATE TABLE public.abandoned_url (
    id uuid DEFAULT public.gen_random_uuid() NOT NULL
);

Looking at the statement that throws the error:

CREATE TABLE public.abandoned_url (
    id uuid DEFAULT public.gen_random_uuid() NOT NULL
);

The reason it throws an error is that pg_dump has put public. in front of gen_random_uuid(). The following statement works fine when the public. prefix is removed from gen_random_uuid():

CREATE TABLE public.abandoned_url (
    id uuid DEFAULT gen_random_uuid() NOT NULL
);

Am I creating the dump file incorrectly? Could this be a bug in pg_dump?
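For context (an assumption, not something stated in the question): gen_random_uuid() is built into PostgreSQL only from version 13 on; in earlier versions it comes from the pgcrypto extension, and a dump restricted to the public schema does not recreate the extension the schema-qualified call depends on. A sketch of installing it in the target database before restoring:

# run against the target database before pg_restore; assumes gen_random_uuid()
# is provided by the pgcrypto extension (PostgreSQL versions before 13)
psql -h localhost -p 5432 -U my_user -d my_db -c "CREATE EXTENSION IF NOT EXISTS pgcrypto;"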

postgresql – pg_restore: error: could not execute query: ERROR: schema “public” already exists

When I try to restore a database using

pg_restore -U username --no-owner --no-privileges -d mydb backupfile.dump

I get an error saying:

pg_restore: error: could not execute query: ERROR:  schema "public" already exists
Command was: CREATE SCHEMA public;

How do I prevent this error? I looked at another similar question on the site, and tried what was suggested in the answer.

I used dropdb mydb and createdb mydb before running the pg_restore command.

In case it is relevant, I am trying to restore the database dumped from PostgreSQL 10.14 into a database in PostgreSQL 12.3.

Thank you.
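One workaround that is sometimes used (a sketch, under the assumption that the only unwanted entry is the schema itself) is to filter the dump's table of contents so the CREATE SCHEMA public step is skipped; the grep pattern below matches the usual TOC line but may need adjusting for a particular dump:

# list the TOC, drop the entry that creates the public schema, restore from the filtered list
pg_restore -l backupfile.dump | grep -v 'SCHEMA - public' > restore.list
pg_restore -U username --no-owner --no-privileges -d mydb -L restore.list backupfile.dump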

postgresql – Stop pg_restore from returning false errors?

How do you stop pg_restore from returning an error code if it encounters a meaningless non-error like schema "public" already exists?

I’m trying to automate a database transfer, and after upgrading PostgreSQL to 12, pg_restore now returns an error status for things that aren’t actually errors.

For example, my database setup script is basically:

sudo psql --user=postgres --no-password --host=localhost   --command="DROP DATABASE IF EXISTS mydb;"
sudo psql --user=postgres --no-password --host=localhost   --command="CREATE DATABASE mydb;"
sudo psql --user=postgres --no-password --host=localhost --dbname mydb  --command="DROP USER IF EXISTS myuser; CREATE USER myuser WITH PASSWORD 'supersecretpassword';"
sudo psql --user=postgres --no-password --host=localhost --dbname mydb  --command="GRANT ALL PRIVILEGES ON DATABASE mydb to myuser;"
sudo pg_restore -U postgres --format=c --create --dbname=mydb /tmp/mydb_snapshot.sql.gz

However, even though the last line logically succeeds in loading the database snapshot, it returns an error code, reporting that:

 pg_restore: error: could not execute query: ERROR:  database "mydb" already exists

It seems a change in Pg12 is that creation of a new database also now populates it with a default public schema, which conflicts with the public schema in any snapshot. I never ran into this issue in Pg11 or 10.

I could modify my script to ignore stderr, but then I’d be blind to actual problems.

How do I tell pg_restore to ignore this error, but still report other errors?
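As far as I know, pg_restore has no flag to ignore specific errors, so one approach (just a sketch; the log path and the pattern of benign messages are assumptions) is to capture stderr and fail the script only when something other than the known "already exists" messages appears:

# run the restore, keep its stderr, and only fail when an error other than
# the benign "already exists" messages shows up (paths are placeholders)
sudo pg_restore -U postgres --format=c --create --dbname=mydb /tmp/mydb_snapshot.sql.gz 2> /tmp/restore_errors.log
if grep 'error:' /tmp/restore_errors.log | grep -qv 'already exists'; then
    echo "pg_restore reported unexpected errors:" >&2
    grep 'error:' /tmp/restore_errors.log | grep -v 'already exists' >&2
    exit 1
fi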

postgresql – pg_restore: error: unable to read from input file: end of file

I am trying to create a dump of a table, which is stored inside my database.
The table is called

webapp_product

In order to create the dump, I use the following command:

docker-compose exec db1 pg_dump -F tar -d db -U user -t webapp_product > /home/backup.tar  

Then on the other machine:

sudo mv backup.tar ../pgdata
docker-compose exec db1 sh
pg_restore -Ft -C -U user -d db /var/lib/postgresql/data/pgdata/backup.tar

Unfortunately, this always results in the same scenario. I receive the following error log:

pg_restore: error: could not read from input file: end of file

I also tried the other way around. After moving the backup file, I tried to use:

cat /var/lib/postgresql/data/pgdata/backup.sql | psql -d db -U user

After invoking this command, I received an error saying that duplicates could not be created (the IDs of the records were the same). To get further, I tried to disable all the constraints, but without the expected effect.

Update 1:

The backup.tar file begins with the following lines:

-- NOTE:
--
-- File paths need to be edited. Search for $$PATH$$ and
-- replace it with the path to the directory containing
-- the extracted data files.
--
--
-- PostgreSQL database dump
--

-- Dumped from database version 12.1 (Debian 12.1-1.pgdg100+1)
-- Dumped by pg_dump version 12.1 (Debian 12.1-1.pgdg100+1)

Can it affect the data restore process?

and ends with the following:



--
-- PostgreSQL database dump complete
--
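One detail that often bites with this workflow (an assumption, not confirmed by the question): docker-compose exec allocates a pseudo-TTY by default, which can mangle binary output such as a tar-format dump when it is redirected to a file on the host. A sketch of the dump command with TTY allocation disabled:

# -T disables pseudo-TTY allocation so the binary dump reaches the file intact
docker-compose exec -T db1 pg_dump -F tar -d db -U user -t webapp_product > /home/backup.tar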


postgresql – How to use pg_restore in a database with different postgis installations (postgis schema location)?

I have database1 where postgis is installed in the public schema.
And database2 where postgis is located in a schema called postgis.

When I dump database1.schema1, schema1.table1 refers to its geom column as public.geometry.

Therefore, pg_restore returns the error "The public schema does not exist", because schema1.table1 has a column of type public.geometry, and on database2 that schema does not exist, nor is PostGIS installed in it.

How can I create a pg_dump without the geometry columns' types being schema-qualified?

Database2's search_path points to the postgis schema.
I'm using PostgreSQL 10.
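As a quick check of where the extension actually lives on each database (plain catalog queries; nothing is assumed beyond the database names used above):

# shows the schema the postgis extension is installed in
psql -d database1 -c "SELECT e.extname, n.nspname AS schema FROM pg_extension e JOIN pg_namespace n ON n.oid = e.extnamespace WHERE e.extname = 'postgis';"
psql -d database2 -c "SELECT e.extname, n.nspname AS schema FROM pg_extension e JOIN pg_namespace n ON n.oid = e.extnamespace WHERE e.extname = 'postgis';"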