mysql – subquery must return only one column in postgresql

I want to get the XML response after the stored procedure call in PostgreSQL. In the query below I was trying to get the output, but it throws the error: subquery must return only one column. Can someone help me with this?

SELECT (
        SELECT xmlforest('2021_PPP_Parasiticide_Program' AS Name, CASE 
                    WHEN "temptable"."C_2021_PPP_PARA" IS NOT NULL
                        THEN 'True'
                    ELSE 'False'
                    END AS requirementsMe, coalesce("temptable"."YTD_2021_Qualifying_Carton_Purchased", 0) AS qualifyingPurchaseAmount, 'Dollar' AS qualifyingPurchaseAmountType, CASE 
                    WHEN "temptable"."C_2021_PPP_PARA" IS NOT NULL
                        THEN 'Active'
                    ELSE 'In-Active'
                    END AS componentStatus)
            ,(
                SELECT (
                        SELECT xmlforest('q1_parasiticide_carton_rebate' AS rewardName, CASE 
                                    WHEN q1_parasiticide_carton_rebate IS NULL
                                        THEN CAST(0 AS VARCHAR)
                                    ELSE cast(coalesce(q1_parasiticide_carton_rebate, 0) AS VARCHAR(10))
                                    END AS rewardAmount)
                        FROM temptable
                        )
                    ,(
                        SELECT xmlforest('q2_parasiticide_carton_rebate' AS rewardName, CASE 
                                    WHEN q2_parasiticide_carton_rebate IS NULL
                                        THEN CAST(0 AS VARCHAR)
                                    ELSE cast(coalesce(q2_parasiticide_carton_rebate, 0) AS VARCHAR(10))
                                    END AS rewardAmount)
                        FROM temptable
                        WHERE temptable.rus_id = temptable.rus_id
                        )
                FROM temptable
                )
        FROM temptable
        );

I want the response in the format below:

<CustomerProgramStatus_response>
   <programComponents>
      <name>2020_PPP_Parasiticide_Program</name>
      <requirementsMet>FALSE</requirementsMet>
      <qualifyingPurchaseAmount>0</qualifyingPurchaseAmount>
      <qualifyingPurchaseAmountType>Dollar</qualifyingPurchaseAmountType>
      <componentStatus>In-Active</componentStatus>
      <componentRewards>
         <rewardName>Q1_Parasiticide_Carton_Rebate</rewardName>
         <rewardAmount>0</rewardAmount>
      </componentRewards>
      <componentRewards>
         <rewardName>Q2_Parasiticide_Carton_Rebate</rewardName>
         <rewardAmount>0</rewardAmount>
      </componentRewards>
   </programComponents>
</CustomerProgramStatus_response>
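One way to sidestep the "subquery must return only one column" error is to build the nested document in a single pass with xmlelement()/xmlagg() instead of scalar subqueries, so no subquery ever has to return more than one column. A hedged sketch follows, using the column names from the question and assuming temptable holds a single row per customer; the element names follow the desired output above:

```sql
SELECT xmlelement(name "CustomerProgramStatus_response",
         xmlelement(name "programComponents",
           xmlforest(
             '2021_PPP_Parasiticide_Program'                       AS name,
             CASE WHEN t."C_2021_PPP_PARA" IS NOT NULL
                  THEN 'True' ELSE 'False' END                     AS "requirementsMet",
             coalesce(t."YTD_2021_Qualifying_Carton_Purchased", 0) AS "qualifyingPurchaseAmount",
             'Dollar'                                              AS "qualifyingPurchaseAmountType",
             CASE WHEN t."C_2021_PPP_PARA" IS NOT NULL
                  THEN 'Active' ELSE 'In-Active' END               AS "componentStatus"),
           -- one componentRewards element per quarter
           xmlelement(name "componentRewards",
             xmlforest('q1_parasiticide_carton_rebate'                    AS "rewardName",
                       coalesce(t.q1_parasiticide_carton_rebate, 0)::varchar AS "rewardAmount")),
           xmlelement(name "componentRewards",
             xmlforest('q2_parasiticide_carton_rebate'                    AS "rewardName",
                       coalesce(t.q2_parasiticide_carton_rebate, 0)::varchar AS "rewardAmount"))))
FROM temptable AS t;
```

If temptable can hold several reward rows, the componentRewards elements would instead be aggregated with xmlagg() over a grouped subquery.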
 

postgresql – Can I GRANT SELECT on all schemas in a database?

I need to create a role whose users can only read (SELECT) data.
I can do it for a particular schema.
But I need it to work with all existing and future schemas (existing, at least). How can I do it?

I’ve tried

GRANT select ON DATABASE mzia TO qaread;

But it says that I can’t:

0LP01 invalid_grant_operation
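That error is expected: GRANT ... ON DATABASE only covers the CONNECT, CREATE, and TEMPORARY privileges, not SELECT on tables. A hedged sketch that grants read access schema by schema; the DO block loops over existing (non-system) schemas, and ALTER DEFAULT PRIVILEGES covers tables created later by the current role:

```sql
DO $$
DECLARE
    s text;
BEGIN
    FOR s IN
        SELECT nspname FROM pg_namespace
        WHERE nspname NOT LIKE 'pg\_%'
          AND nspname <> 'information_schema'
    LOOP
        EXECUTE format('GRANT USAGE ON SCHEMA %I TO qaread', s);
        EXECUTE format('GRANT SELECT ON ALL TABLES IN SCHEMA %I TO qaread', s);
        -- tables created later in this schema by the current role
        EXECUTE format('ALTER DEFAULT PRIVILEGES IN SCHEMA %I GRANT SELECT ON TABLES TO qaread', s);
    END LOOP;
END $$;
```

Schemas created in the future still need the same grants run again (or an event trigger that re-runs them), since default privileges are per schema and per granting role.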

postgresql – Query slowing down the database performance

I have a query to find the maximum spacing between charging stations on any given route, using PostGIS and pgRouting. An example query is below:

select (max(delr) * st_length(line::geography) * 0.000621371) as max_spacing
from (select sq2.ratio - coalesce((lag(sq2.ratio) over ( order by sq2.ratio)), 0) as delr, line
      from (select ST_LineLocatePoint(line, sqa.points) as ratio, line
            from sp_od_ferry(98282, 98002) as line,
                 (select st_setsrid(st_makepoint(longitude, latitude), 4326) as points
                  from (select longitude,
                               latitude
                        from evses_now
                        where analysis_id = 565
                          and (connector_code = 1
                           or connector_code = 3)
                        union
                        select longitude,
                            latitude
                        from zipcode_record
                        where zip = '98282'
                           or zip = '98002') as sq) as sqa
            where st_dwithin(line::geography, sqa.points::geography, 16093.4)
            order by ratio asc) as sq2) as sq3
group by sq3.line;

Briefly, the logic is to find the points (charging stations) near the shortest path (given by user-defined function sp_od_ferry()) between origin and destination and find the length of the longest segment between points.

I have to run the above query for several OD pairs, and several of these calculations can be launched in parallel by users. I used AWS RDS performance insights and it found the above query to be the slowest one and causing database slowdown (and 100% CPU usage on the DB instance).


On EXPLAIN ANALYZE, it shows the nested inner loop to be the costliest step. https://explain.dalibo.com/plan/jTf


I understand one way to reduce the database load would be to provision a bigger RDS instance. I currently use db.t3.small.

I used pgTune to make the changes to the default AWS RDS Postgres 12.5 settings. The new config is below:

max_connections = 100
shared_buffers = 512MB
effective_cache_size = 1536MB
maintenance_work_mem = 128MB
checkpoint_completion_target = 0.9
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 2621kB
min_wal_size = 2GB
max_wal_size = 8GB

Any suggestions regarding the query or ideas about how I can manage the database load while keeping the costs low are appreciated.
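One thing worth checking before resizing the instance is whether the ST_DWithin filter can use a spatial index at all. A hedged sketch, assuming the index names are made up and that the union subquery may still prevent index use (storing a precomputed geography column per table is the more reliable variant):

```sql
-- Expression indexes matching the point construction in the query,
-- so ST_DWithin can do an index scan instead of computing the
-- distance for every row.
CREATE INDEX IF NOT EXISTS evses_now_geog_idx
    ON evses_now
    USING gist ((st_setsrid(st_makepoint(longitude, latitude), 4326)::geography));

CREATE INDEX IF NOT EXISTS zipcode_record_geog_idx
    ON zipcode_record
    USING gist ((st_setsrid(st_makepoint(longitude, latitude), 4326)::geography));
```

It may also help to materialize the sp_od_ferry() result once per OD pair (e.g. into a temp table) rather than recomputing the route inside the nested query.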

postgresql – How to trigger when an event crosses now()?

Is it somehow possible in Postgres 12 to trigger an event when a row in a table with a tsrange column crosses now(), or when someone inserts a row whose tsrange lies before now()?

Triggers, as far as I know, only fire when someone inserts or updates a row.

Triggering when someone inserts a row that has already passed now() seems doable with a trigger that fires on all inserts, but I am not sure how to trigger an event when a given timestamp in that row crosses now().

And can both trigger scenarios be combined into one?
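Since triggers only fire on DML, "crossing now()" has to come from a scheduled job that polls for newly expired ranges. A hedged sketch using the pg_cron extension; the table name `events`, the flag column `fired`, and the range column `valid_range` are assumptions for illustration:

```sql
-- Run once a minute: mark rows whose range has just passed now().
-- An ordinary INSERT/UPDATE trigger on the same table covers the
-- case of rows inserted already in the past, so both scenarios end
-- up handled by the same UPDATE ... SET fired = true path.
SELECT cron.schedule('check-expired-ranges', '* * * * *', $$
    UPDATE events
       SET fired = true
     WHERE NOT fired
       AND upper(valid_range) <= now()
$$);
```

The polling granularity (here one minute) bounds how late the "event" fires; for finer granularity an external scheduler or a LISTEN/NOTIFY worker is the usual approach.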

Stream intermediate parts of resultset in PostgreSQL

If I have an execution plan in PostgreSQL that returns a large number of rows (e.g. 10,000) and takes some time to complete, is it possible to stream chunks (or groups of rows) of the final result set before the execution plan finishes?
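The usual tool for this is a cursor, which lets the client fetch the result in chunks. Whether rows actually arrive before the plan finishes depends on the plan shape: a streaming plan (e.g. an index scan) can hand back rows immediately, while a blocking node such as a final Sort or HashAggregate must consume all its input first. A hedged sketch, with `some_large_table` as a placeholder:

```sql
BEGIN;
DECLARE big_cur CURSOR FOR
    SELECT * FROM some_large_table ORDER BY id;
FETCH 1000 FROM big_cur;   -- first chunk
FETCH 1000 FROM big_cur;   -- next chunk
CLOSE big_cur;
COMMIT;
```

Most client drivers expose the same mechanism (e.g. a fetch size / row-by-row mode) without writing DECLARE/FETCH by hand.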

postgresql – Select list of attributes that exist in all groups

select distinct(p) from Pub;
/*
phdthesis
article
proceedings
incollection
inproceedings
www
book
mastersthesis
*/
select * from field limit 20;
/*
tr/meltdown/s18,0,author
tr/meltdown/s18,1,author
tr/meltdown/s18,2,author
tr/meltdown/s18,3,author
tr/meltdown/s18,4,author
tr/meltdown/s18,5,author
tr/meltdown/s18,6,author
tr/meltdown/s18,7,author
tr/meltdown/s18,8,author
tr/meltdown/s18,9,author
tr/meltdown/s18,10,title
tr/meltdown/s18,11,journal
tr/meltdown/s18,12,year
tr/meltdown/s18,13,ee
tr/meltdown/m18,0,author
tr/meltdown/m18,1,author
tr/meltdown/m18,2,author
tr/meltdown/m18,3,author
tr/meltdown/m18,4,author
tr/meltdown/m18,5,author
*/

Pub and Field join on column k (the primary key).

I want to write a query that returns only those attributes (author/journal/year etc.) that have at least one record in every publication type: phdthesis, book, etc.
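This is the classic "relational division" pattern: keep only the attributes whose set of publication types covers all types. A hedged sketch, assuming from the sample output that field has columns (k, idx, fieldname) and Pub has (k, p); adjust the names to the real schema:

```sql
SELECT f.fieldname
FROM   field f
JOIN   Pub   p ON p.k = f.k
GROUP  BY f.fieldname
-- an attribute qualifies only if it appears with every distinct type
HAVING COUNT(DISTINCT p.p) = (SELECT COUNT(DISTINCT p) FROM Pub);
```

The HAVING clause compares the number of distinct publication types each attribute occurs with against the total number of types (8 in the sample).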

postgresql – Ignoring temp table in Postgres event trigger

I am trying to have a trigger that gets invoked when new tables, except temporary tables, are created.

This is what I have tried:

CREATE OR REPLACE FUNCTION insert()
RETURNS event_trigger
AS $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        RAISE NOTICE 'caught % event on %', r.command_tag, r.object_identity;
    END LOOP;
 END;
 $$
 LANGUAGE plpgsql;

CREATE EVENT TRIGGER insert_event ON ddl_command_end
  WHEN TAG IN ('CREATE TABLE', 'CREATE TABLE AS', 'CREATE FUNCTION', 'ALTER TABLE', 'DROP TABLE')
 EXECUTE PROCEDURE insert();

create TEMP table my_table(id serial primary key);

This is the output I see:

CREATE TABLE
CREATE FUNCTION
CREATE EVENT TRIGGER
CREATE FUNCTION
CREATE EVENT TRIGGER
NOTICE:  caught CREATE SEQUENCE event on pg_temp.my_table_id_seq
NOTICE:  caught CREATE TABLE event on pg_temp.my_table
NOTICE:  caught CREATE INDEX event on pg_temp.my_table_pkey
NOTICE:  caught ALTER SEQUENCE event on pg_temp.my_table_id_seq

How do I exclude temporary tables from invoking the trigger?
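One option, hedged as a sketch: pg_event_trigger_ddl_commands() exposes a schema_name column, and temporary objects live in a pg_temp schema, so they can be skipped inside the loop:

```sql
CREATE OR REPLACE FUNCTION insert()
RETURNS event_trigger
AS $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        -- skip temporary objects (pg_temp, pg_temp_1, ...)
        CONTINUE WHEN r.schema_name LIKE 'pg\_temp%';
        RAISE NOTICE 'caught % event on %', r.command_tag, r.object_identity;
    END LOOP;
END;
$$
LANGUAGE plpgsql;
```

The event trigger itself still fires for temporary tables; the filter simply makes the function ignore them, which is the documented way to do this since the WHEN TAG clause cannot distinguish temp from regular tables.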

postgresql – how could my patroni clusters split read and write queries in load balance?

I want to split queries at the load-balancer level or on the Patroni servers, because we cannot change or add a connection string in the app.
Does Pgpool-II not work with Patroni? Should I use two connection strings for my app? Can SELECT queries not go to the secondary automatically?

node_id |    hostname     | port | status | lb_weight |  role  | select_cnt | load_balance_node | replication_delay | replication_state | replication_sync_state | last_status_change  
---------+-----------------+------+--------+-----------+--------+------------+-------------------+-------------------+-------------------+------------------------+---------------------
 0       | 192.168.118.138 | 5432 | up     | 0.400000  | master | 315        | false             | 0                 |                   |                        | 2021-04-18 15:27:05
 1       | 192.168.118.139 | 5432 | up     | 0.600000  | slave  | 0          | true              | 0                 |                   |                        | 2021-04-18 15:27:05
(2 rows)
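Pgpool-II can do this without touching the app: it speaks the Postgres protocol, so a single connection string pointing at pgpool is enough, and pgpool routes writes to the primary while distributing plain SELECTs by lb_weight. A hedged pgpool.conf fragment (parameter names should be verified against your pgpool version):

```
load_balance_mode = on
backend_clustering_mode = 'streaming_replication'   # pgpool 4.2+; older versions use master_slave_mode = on
statement_level_load_balance = on                   # balance per statement rather than per session
```

Note that SELECTs inside an explicit transaction, or ones following a write, are still sent to the primary, which may explain a nonzero select_cnt on the master even with load balancing enabled.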

How to represent a list of entities within a table of the same entity in PostgreSQL?

There are a couple of ways you can go about this, but the most relational and normalized way is to create a second table called UserFriendList with the columns UserId and FriendUserId, storing one row per friend for each user. This table is one-to-many from User.Id to UserFriendList.UserId, and it also bridges the join back to the User table (UserFriendList.FriendUserId to User.Id) to get all the user attributes of the friends. This kind of table is known as a bridge / junction / linking table.

Example query with this design:

SELECT 
    User.Id AS UserId, User.FirstName AS UserFirstName, User.LastName AS UserLastName, 
    Friend.Id AS FriendUserId, Friend.FirstName AS FriendFirstName, Friend.LastName AS FriendLastName
FROM User
INNER JOIN UserFriendList
    ON User.Id = UserFriendList.UserId
INNER JOIN User AS Friend
    ON UserFriendList.FriendUserId = Friend.Id
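A hedged DDL sketch of that bridge table; the names follow the answer's convention ("User", "UserFriendList") but the types and constraints are otherwise assumptions:

```sql
CREATE TABLE "UserFriendList" (
    "UserId"       bigint NOT NULL REFERENCES "User"("Id"),
    "FriendUserId" bigint NOT NULL REFERENCES "User"("Id"),
    PRIMARY KEY ("UserId", "FriendUserId"),        -- each friendship stored once per direction
    CHECK ("UserId" <> "FriendUserId")             -- a user cannot friend themselves
);
```

The composite primary key doubles as the index that makes both joins in the query above efficient.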

Alternatively, you can store a FriendList column directly on the User table, as either a comma-delimited list or JSON. But both of these are denormalized solutions: they make changes harder to maintain, can lead to data redundancy, and will inflate the size of your User table, which could make querying it less efficient.

postgresql – Postgres connection times out on LAN, but not WAN

SETUP

Using
Postgres 11,
pg-promise npm module

I have a local network of about 5 machines all running on a Class C 192.168.1.0/24 configuration.

My postgres is running on 192.168.1.A and is accessible by external connection through NAT and firewalling and my heroku API connects to postgres on 192.168.1.A perfectly.

On 192.168.1.B inside my network I have the dev version of the API running similarly to heroku, on a different machine.

The Problem

192.168.1.B is persistently giving me a timeout error. The machines are in the same physical location, and I'm not sure how to track down why I am getting this timeout. The setup is very simple.

pg_hba.conf has these lines:

host    all             all             127.0.0.1/32            md5
host    all             all             0.0.0.0/0               md5
host    all             all             192.168.1.0/24          md5

Heroku log says:

2021-04-16T22:46:10.973041+00:00 heroku[router]: at=info method=GET path="/api/locations" host=<host> request_id=<id> fwd="<an ip>" dyno=web.1 connect=1ms service=735ms status=304 bytes=182 protocol=https

Express on the local dev machine says:

GET /api/locations 500 31647.158 ms - 37

I'm stumped as to why this would happen. The WAN connection works, but the LAN connection times out?