mysql – Two left joins not working

I have tried joining the speler table twice, but it says the table/alias is not unique.

SELECT speler.Roepnaam as team1, wedstrijd.Speler2ID as team2
FROM wedstrijd 
LEFT JOIN speler 
ON wedstrijd.Speler1ID = speler.ID 
LEFT JOIN speler 
ON wedstrijd.Speler2ID = speler.ID WHERE speler.ID
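
For reference, the usual fix for the "not unique table/alias" error is to give each copy of speler its own alias; a minimal sketch against the same schema:

    -- Each join of speler gets a distinct alias, so both players can be selected
    SELECT s1.Roepnaam AS team1,
           s2.Roepnaam AS team2
    FROM wedstrijd
    LEFT JOIN speler AS s1 ON wedstrijd.Speler1ID = s1.ID
    LEFT JOIN speler AS s2 ON wedstrijd.Speler2ID = s2.ID;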

sqlite – Offloading database joins to IoT devices

Solution as it is right now

I have a solution where I gather information from a proprietary product made by a different company at various sites. The solution is based on a single Go binary that contains everything needed to run the application (even an embedded SQLite) and is deployed to Windows computers running the proprietary software. All those Windows computers are connected to the Internet and sit behind a firewall that allows all outgoing traffic but blocks incoming traffic, and the owners of the computers don't have the knowledge to configure their firewalls (to be honest, for most of them installing the Go program is already a demanding task).

Users can access the data stored in an SQLite database from their mobile device or another computer (1). The components needed for this are installed on a server that provides its services over the Internet. I created REST web services (2) that send GraphQL queries to the computers on the respective site via RPC over NATS (3). The Go program installed on the site computers (4) runs those queries against the local SQLite (5) and sends the result back to the NATS queue (6) (7). The result is taken from the NATS queue and returned to the caller by the same REST service that processed the incoming call (8).

[Architecture diagram]

Improvement I’m looking for

This setup works fine when I query single sites. But I should also be able to query several sites in parallel and retrieve a single “recordset”.

Here’s a made up example:

Let's assume there is a Persons table available on each site. I can query that table by running SELECT SiteNumber, PersonName FROM Persons.

I need to run that query on, for example, 3 sites and merge/join the results into one, which would look like this:

2, Daisy
2, Eve
2, Adam
5, Bob
7, Alice

The SQL statements I actually need to run are much more complicated than this; I would need `GROUP BY` and `ORDER BY`, for example. This excludes approaches where I would, say, create three maps and join them into one.

So far I intentionally don't store or accumulate data on the server. What are my options for post-processing the data? I would rather not INSERT all sub-results into a temporary table on the server. I found no distributed database that can be embedded into Go and works across firewall borders.
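
For what it's worth, here is a minimal sketch (my assumption, using the made-up Persons example) of how the merge could be expressed if the per-site sub-results were loaded into a scratch SQLite database created per request, for example an in-memory one, so nothing is persisted on the server:

    -- Per-request scratch table; one INSERT batch per site result received over NATS
    CREATE TABLE persons_merged (SiteNumber INTEGER, PersonName TEXT);

    INSERT INTO persons_merged (SiteNumber, PersonName) VALUES (2, 'Daisy'), (2, 'Eve'), (2, 'Adam');
    INSERT INTO persons_merged (SiteNumber, PersonName) VALUES (5, 'Bob');
    INSERT INTO persons_merged (SiteNumber, PersonName) VALUES (7, 'Alice');

    -- GROUP BY / ORDER BY now work across all sites in one place
    SELECT SiteNumber, COUNT(*) AS person_count
    FROM persons_merged
    GROUP BY SiteNumber
    ORDER BY SiteNumber;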

Convert SQL Select to a Magento getCollection statement; Multiple dates and joins

I don't know Magento and I need to translate a SQL SELECT query into the equivalent Magento getCollection statement. Thanks for the help.

    SELECT
    ordertot.entity_id as order_id,
    `state`,
    `status`,
    `increment_id`,
    `customer_email`,
    `customer_firstname`,
    `customer_lastname`,
    ordertot.customer_note,
    ordertot.created_at,
    payment.method,
    saddress.company,
    saddress.city as suburb,
    saddress.region as state,
    saddress.telephone,
    item.item_id,
    item.sku,
    item.name as description,
    round(item.qty_ordered,0) as qty_ordered ,
    round(item.base_row_total_incl_tax,2) as price_incl_item,
    round(shipping_incl_tax,2) as shipping_incl
FROM `posninen_mage`.`sales_flat_order` as ordertot
INNER JOIN `posninen_mage`.`sales_flat_order_payment` as payment
ON ordertot.entity_id = payment.parent_id
INNER JOIN `posninen_mage`.`sales_flat_order_address` as saddress
ON ordertot.entity_id = saddress.parent_id
INNER JOIN `posninen_mage`.`sales_flat_order_item` as item
ON ordertot.entity_id = item.order_id
WHERE ordertot.created_at >= (current_date - interval 5 day) 
AND saddress.address_type = 'shipping'

jsonpath – JmesPath joins nested array elements

I realize there are several other JMESPath join questions here, but I am having a separate problem for which I have not found any examples: I need to concatenate (that is, join) a set of JSON values that have dynamically named keys within a single element.

If I start with the following JSON data structure:

{
  "data": [
    {
      "secmeetingdays": {
        "dayset_01": {
          "day_01": "M",
          "day_02": "W",
          "day_03": "F"
        },
        "dayset_02": {
          "day_01": "T",
          "day_02": "TH"
        }
      }
    }
  ]
}

I would like to end with something like this:

[
  {
    "M,W,F"
  },
  {
    "T,TH"
  }
]

I managed to flatten the data, but I'm completely stuck on the join syntax. Nothing I try seems to work.

  1. Attempt 1: data[].secmeetingdays | [0].*.*
[
  [
    "M",
    "W",
    "F"
  ],
  [
    "T",
    "TH"
  ]
]

Almost, but not quite there.

  2. Attempt 2: data[].secmeetingdays | [0].*.* | {join(',',@)}

failed

  3. Attempt 3: data[].secmeetingdays | [0].*.*.join(',',@)

failed

  4. Attempt 4: data[].secmeetingdays | {join(',',@[0].*.*)}

failed

  5. Attempt 5: I tried to avoid the two flattening steps so that I would have a reference available inside the join:

data[].secmeetingdays | [0].* | join(',',@[])

failed

  6. Attempt 6: data[].secmeetingdays | [0].*.* | @.join(',', []) gives a result, but it is not what I want:
"M,W,F,T,TH"

The example at https://jmespath.org/ has a join, but only over a single list of items. How can I join the inner lists without also merging them into one?

SQL Server – Optimizing a query to retrieve records at random with multiple joins and filters

I have the following diagram:

[DB diagram]

This question was also posted on Stack Overflow, but I also want to consult specialists more focused on database administration, given the nature of my project. Sorry if that is a mistake.

Right now, the Property table holds more than 70,000 records. I am developing an update to support more than 500 concurrent sessions. The app will offer a map for performing searches, which is why GeoLocation declares Coordinate with the geography data type. We now have a big problem, because the response time of some of the most important requests is very slow. The app should return around 1,000 records at a time if that many results match the specified parameters.
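
For context only (my illustration, not part of the query shown below), a map-driven search against that geography column would typically use a radius predicate such as:

    -- Hypothetical radius filter on GeoLocation.Coordinate (geography: lat/long, SRID 4326)
    DECLARE @center geography = geography::Point(-12.0464, -77.0428, 4326);
    SELECT g.PropertyId
    FROM dbo.GeoLocation AS g
    WHERE g.Coordinate.STDistance(@center) <= 2000; -- metres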

The parameters are distributed across all tables in the schema (in fact, this is only part of the schema). Feature is a table holding all the main "characteristics" of the properties (number of bedrooms, number of garages, etc.).

With that in mind, the query that is taking so long right now is:

DECLARE @cols NVARCHAR(MAX), @query NVARCHAR(MAX);

DECLARE @properties TABLE(
    [ID] INT
)

INSERT INTO @properties
    SELECT p.[Id]
    FROM [Property] p
    INNER JOIN [GeoLocation] AS [g]
        ON [p].[Id] = [g].[PropertyId]
    INNER JOIN [PropertyFeature] AS [pf]
        ON [pf].[PropertyId] = [p].[Id]
    INNER JOIN [Feature] AS [f]
        ON [pf].[FeatureId] = [f].[Id]
    WHERE [g].[Address] IS NOT NULL AND (([g].[Address] <> N'') OR [g].[Address] IS NULL)
        AND [pf].[FeatureId] IN (
            SELECT ID FROM Feature WHERE FeatureType = 1)
    GROUP BY p.Id, p.ModificationDate
    ORDER BY [p].ModificationDate DESC, NEWID()
    OFFSET 0 ROWS
    FETCH NEXT 1000 ROWS ONLY

DECLARE @features TABLE(
    [Name] NVARCHAR(80)
)

INSERT INTO @features
    SELECT Name FROM Feature WHERE FeatureType = 1

CREATE TABLE #temptable
(
    Id INT,
    Url NVARCHAR(200),
    Title NVARCHAR(300),
    Address NVARCHAR(200),
    Domain Tinyint,
    Price Real,
    Image NVARCHAR(150), 
    Name NVARCHAR(80),
    Value NVARCHAR(150)
)

INSERT INTO #temptable
SELECT
    [t].[Id],
    [t].[Url],
    [t].[GeneratedTitle] AS [Title],
    [t].[Address],
    [t].[Domain],
    [t].[Price],
    (SELECT TOP(1) ISNULL([m].[Resize1200x1200], [m].[Resize730x532])
     FROM [Multimedia] AS [m]
     WHERE [t].[Id] = [m].[PropertyId]
        AND m.MultimediaType = 1
     ORDER BY [m].[Order]) AS [Image],
    [t].[Name],
    [t].[Value]
FROM
    (SELECT
        [p].[Id],
        [p].[Url],
        [p].[GeneratedTitle],
        [g].[Address],
        [p].[Domain],
        [pr].[Amount] AS Price,
        [p].[ModificationDate],
        [f].[Name],
        [pf].[Value]
    FROM [Property] AS [p]
    INNER JOIN [GeoLocation] AS [g]
        ON [p].[Id] = [g].[PropertyId]
    INNER JOIN [PropertyFeature] AS [pf]
        ON [pf].[PropertyId] = [p].[Id]
    INNER JOIN [Feature] AS [f]
        ON [pf].[FeatureId] = [f].[Id]
    INNER JOIN [Operation] AS [o]
        ON [p].[Id] = [o].[PropertyId]
    INNER JOIN [OperationType] AS [o0]
        ON [o].[OperationTypeId] = [o0].[Id]
    INNER JOIN [Price] AS [pr]
        ON [pr].[OperationId] = [o].[Id]
    WHERE p.Id IN
        (SELECT Id FROM @properties)
    GROUP BY [p].[Id],
             [p].[Url],
             [p].[GeneratedTitle],
             [g].[Address],
             [p].[Domain],
             [pr].[Amount],
             [p].[ModificationDate],
             [f].[Name],
             [pf].[Value]) AS [t]
    ORDER BY [t].[ModificationDate] DESC

SET @cols = STUFF(
                (
                    SELECT DISTINCT
                            ',' + QUOTENAME(c.[Name])
                    FROM @features c FOR XML PATH(''), TYPE
                 ).value('.', 'nvarchar(max)'), 1, 1, '');
SET @query = 'SELECT [Id],
                     [Url],
                     [Title],
                     [Address],
                     [Domain],
                     [Price],
                     [Image],
                     ' + @cols + '
               FROM (SELECT [Id],
                            [Url],
                            [Title],
                            [Address],
                            [Domain],
                            [Price],
                            [Image],
                            [Value] AS [value],
                            [Name] AS [name]
                     FROM #temptable) x
                     PIVOT (MAX(value) FOR name IN (' + @cols + ')) p';
EXECUTE(@query);

DROP TABLE #temptable
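
As an aside, and only if the server is SQL Server 2017 or later (an assumption on my part), the STUFF ... FOR XML PATH construction that builds @cols can be written more simply with STRING_AGG:

    -- Assumes SQL Server 2017+; produces the same comma-separated, quoted column list
    SET @cols = (SELECT STRING_AGG(QUOTENAME([Name]), ',')
                 FROM (SELECT DISTINCT [Name] FROM @features) AS d);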

The execution plan and live query statistics show me the following:

[Query execution plan]

The query above attempts to obtain a given number of record IDs at random while retaining all of the filter criteria, so that only the IDs of records meeting those criteria are returned. It currently takes up to 15 seconds, which is a lot when more than 400 users are using the application simultaneously.
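
A small side observation of mine (not something stated in the original post): the Address predicate in the first query collapses to a single comparison, because a NULL Address can never satisfy <> N'' anyway:

    -- [Address] IS NOT NULL AND (([Address] <> N'') OR [Address] IS NULL)
    -- filters exactly the same rows as:
    WHERE [g].[Address] <> N''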

Please, I need your help with this. I've been trying to solve this problem for three weeks without success, although a lot of progress has been made (it used to take 2 minutes on average).

If it helps, I can give you access to a deployed "dummy" version of the database with the same amount of records to test and see the problem directly.

Thanks in advance …

=====================================================================================================

INDEXES:

The indexes currently on the tables are:

GO
CREATE UNIQUE NONCLUSTERED INDEX IX_Property_ModificationDate
ON [dbo].[Property] (ModificationDate DESC)
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON)

GO
CREATE NONCLUSTERED INDEX [IX_Property_ParentId_StatusCode]
ON [dbo].[Property] ([ParentId] ASC, [StatusCode] ASC)
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON);

GO
CREATE NONCLUSTERED INDEX [IX_Property_ParentId_StatusCode_Id_ModificationDate]
ON [dbo].[Property] ([ParentId] ASC, [StatusCode] ASC, [Id] ASC, [ModificationDate] ASC)
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON);

GO
CREATE NONCLUSTERED INDEX [IX_Property_ParentId]
    ON [dbo].[Property] ([ParentId] ASC)
    WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON);

GO
CREATE NONCLUSTERED INDEX [IX_Property_Identity_Domain_StatusCode]
    ON [dbo].[Property] ([Identity] ASC, [Domain] ASC, [StatusCode] ASC)
    WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON);

GO
CREATE NONCLUSTERED INDEX [IX_Property_Id_ModificationDate]
ON [dbo].[Property] (Id ASC, ModificationDate ASC)
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON);

GO
CREATE NONCLUSTERED INDEX [IX_Property_PublisherId]
    ON [dbo].[Property] ([PublisherId] ASC)
    WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON);

GO
CREATE NONCLUSTERED INDEX [IX_Property_RealEstateTypeId]
    ON [dbo].[Property] ([RealEstateTypeId] ASC)
    WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON)

GO
CREATE INDEX FIX_Property_StatusCode_Online ON [dbo].[Property] (StatusCode) WHERE StatusCode = 1
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON)
GO

CREATE INDEX FIX_Property_StatusCode_Offline ON [dbo].[Property] (StatusCode) WHERE StatusCode = 0
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON)
GO

CREATE INDEX FIX_Property_Domain_Urbania ON [dbo].[Property] (Domain) WHERE Domain = 1
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON)
GO

CREATE INDEX FIX_Property_Domain_Adondevivir ON [dbo].[Property] (Domain) WHERE Domain = 2
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON)
GO

GO
CREATE NONCLUSTERED INDEX [IX_GeoLocation_PropertyId_ModificationDate]
ON [dbo].[GeoLocation] (PropertyId ASC, [ModificationDate] DESC)
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON);

GO
CREATE NONCLUSTERED INDEX [IX_GeoLocation_PropertyId_Address]
ON [dbo].[GeoLocation] (PropertyId ASC, [Address] ASC)
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON);

GO
CREATE UNIQUE NONCLUSTERED INDEX IX_GeoLocation_ModificationDate
ON [dbo].[GeoLocation] (ModificationDate DESC)
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON)
GO

CREATE NONCLUSTERED INDEX [IX_GeoLocation_Ubigeo]
ON [dbo].[GeoLocation] ([Ubigeo] ASC)
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON)

GO
CREATE UNIQUE NONCLUSTERED INDEX [IX_GeoLocation_PropertyId]
    ON [dbo].[GeoLocation] ([PropertyId] ASC)
    WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON)
GO

CREATE SPATIAL INDEX SIX_GeoLocation_Coordinate ON [dbo].[GeoLocation] (Coordinate)
GO

CREATE INDEX FIX_GeoLocation_Domain_Urbania ON [dbo].[GeoLocation] (Domain) WHERE Domain = 1
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON)
GO

CREATE INDEX FIX_GeoLocation_Domain_Adondevivir ON [dbo].[GeoLocation] (Domain) WHERE Domain = 2
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON)
GO

GO
CREATE NONCLUSTERED INDEX [IX_Multimedia_PropertyId_Order]
ON [dbo].[Multimedia] (PropertyId ASC, [Order] ASC)
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON);

GO
CREATE NONCLUSTERED INDEX [IX_Multimedia_PropertyId]
    ON [dbo].[Multimedia] ([PropertyId] ASC)
    WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON);

GO
CREATE NONCLUSTERED INDEX [IX_Multimedia_Order]
    ON [dbo].[Multimedia] ([Order] ASC)
    WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON);
GO

CREATE NONCLUSTERED INDEX [PK_Multimedia_Property]
    ON [dbo].[Multimedia] ([Id] ASC, [PropertyId] ASC);
GO

CREATE INDEX FIX_Multimedia_MultimediaType_Image ON [dbo].[Multimedia] (MultimediaType) WHERE MultimediaType = 1
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON)
GO

GO
CREATE NONCLUSTERED INDEX [IX_PropertyFeature_PropertyId_FeatureId]
ON [dbo].[PropertyFeature] (PropertyId ASC, [FeatureId] ASC)
WITH (SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, FILLFACTOR = 90, ONLINE = ON);

GO
CREATE NONCLUSTERED INDEX [IX_PropertyFeature_FeatureId]
    ON [dbo].[PropertyFeature] ([FeatureId] ASC);

GO
CREATE NONCLUSTERED INDEX [IX_PropertyFeature_PropertyId]
    ON [dbo].[PropertyFeature] ([PropertyId] ASC);

GO
CREATE NONCLUSTERED INDEX [IX_PropertyFeature-FeatureId]
    ON [dbo].[PropertyFeature] ([Id] ASC, [FeatureId] ASC);

GO
CREATE NONCLUSTERED INDEX [IX_PropertyFeature_Property]
    ON [dbo].[PropertyFeature] ([Id] ASC, [PropertyId] ASC);

GO
CREATE NONCLUSTERED INDEX [IX_Operation_PropertyId]
    ON [dbo].[Operation] ([PropertyId] ASC);

GO
CREATE NONCLUSTERED INDEX [IX_Operation_OperationTypeId]
    ON [dbo].[Operation] ([OperationTypeId] ASC);

GO
CREATE NONCLUSTERED INDEX [IX_Price_OperationId]
    ON [dbo].[Price] ([OperationId] ASC);

GO
CREATE NONCLUSTERED INDEX [IX_Price_Operation]
    ON [dbo].[Price] ([Id] ASC, [OperationId] ASC);

magento2.3 – Magento 2 join two custom tables in a custom module

First of all, thank you for reading my question. I am learning to program in M2, so forgive me for any strange questions. So far, I have managed to create a custom module and an edit page. My main database table is called dsssolutions_management_vendor. My form loads this as expected, but now I want to load additional tables to fill out my form as well. I would like to follow the coding standards as closely as possible.

At the moment I'm not sure, and I can't find online, what the right approach is to load additional database tables into my form by default (and perhaps also to save this data easily later), but let's start with the join to understand it correctly. For the form I use a ui_component.

Personally, I would expect that I have to somehow modify the data provider so that the getData() function loads the data by default. Below is my current code for this file. Note that I added the ('general') key because I use tabs; without it M2 does not fill out the edit form.

    $this->collection = $collectionFactory->create();
    $this->dataPersistor = $dataPersistor;
    parent::__construct($name, $primaryFieldName, $requestFieldName, $meta, $data);
}

/**
 * Get data
 *
 * @return array
 */
public function getData()
{
    if (isset($this->loadedData)) {
        return $this->loadedData;
    }
    $items = $this->collection->getItems();
    foreach ($items as $model) {
        $this->loadedData[$model->getId()]['general'] = $model->getData();
    }
    $data = $this->dataPersistor->get('dsssolutions_management_vendor');

    if (!empty($data)) {
        $model = $this->collection->getNewEmptyItem();
        $model->setData($data);
        $this->loadedData[$model->getId()] = $model->getData();
        $this->dataPersistor->clear('dsssolutions_management_vendor');
    }

    return $this->loadedData;
}
}

The table I want to join is called dsssolutions_management_vendoraddress (I know the naming convention should probably follow the vendor's, but I can change it later). In this table I have a vendor_id column which should match the ID of the vendor record that I am retrieving. For now it is one-to-one, but later maybe one-to-many.

Another potential place to do this might be the vendor Edit.php controller. For completeness, that code is below as well.

    $this->resultPageFactory = $resultPageFactory;
    parent::__construct($context, $coreRegistry);
}

/**
 * Edit action
 *
 * @return \Magento\Framework\Controller\ResultInterface
 */
public function execute()
{
    // 1. Get ID and create model
    $id = $this->getRequest()->getParam('vendor_id');
    $model = $this->_objectManager->create(\DssSolutions\Management\Model\Vendor::class);

    // 2. Initial checking
    if ($id) {
        $model->load($id);
        if (!$model->getId()) {
            $this->messageManager->addErrorMessage(__('This Vendor no longer exists.'));
            /** @var \Magento\Backend\Model\View\Result\Redirect $resultRedirect */
            $resultRedirect = $this->resultRedirectFactory->create();
            return $resultRedirect->setPath('*/*/');
        }
    }
    $this->_coreRegistry->register('dsssolutions_management_vendor', $model);

    // 3. Build edit form
    /** @var \Magento\Backend\Model\View\Result\Page $resultPage */
    $resultPage = $this->resultPageFactory->create();
    $this->initPage($resultPage)->addBreadcrumb(
        $id ? __('Edit Vendor') : __('New Vendor'),
        $id ? __('Edit Vendor') : __('New Vendor')
    );
    $resultPage->getConfig()->getTitle()->prepend(__('Vendors'));
    $resultPage->getConfig()->getTitle()->prepend($model->getId() ? __('Edit Vendor %1', $model->getVendorName()) : __('New Vendor'));
    return $resultPage;
}
}

I hope someone can point me in the right direction here.

IMPORTANT | ChimpanzeeHost joins the battle against the Corona virus | Folding@Home team 258618

During the difficult times all of humanity is currently going through, we at ChimpanzeeHost have decided to dedicate all of our currently unused servers to the battle against the Corona virus (SARS-CoV-2) using the Folding@Home software, which lets us donate our computing power for use in various medical research projects. Please join our team to help in this growing effort. Thank you. Stay safe.

(MySQL Aggregate, JOINS and GROUP BY function) Sakila database project

The current pandemic makes it very difficult to contact my teachers for help, so I hope I can describe my problem well enough to get help here.

The database used is the sakila example database (I tried to create a fiddle, but the tables are just too big).

My data will be grouped by store (there are only 2 video stores), and I need to retrieve:

  1. Total sales of each store (teacher hint: the store table has inventory, the inventory can be rented, and the rental table has payments).
  2. number of customers per store
  3. inventory by store
  4. number of films per store
  5. number of films rented that have not yet been returned

Here is my incorrect output:

---------------------------------------------------------------------------------------------------------------------------------
| Jordan_Rasmussen | store_id | total_sales | num_customers | count_inventory | count_titles | inventory_cost | num_rentals_out |
---------------------------------------------------------------------------------------------------------------------------------
| (date&time)      | 1        | 68359569.18 | 326           | 2270            | 759          | 952923.30      | 29992           |
---------------------------------------------------------------------------------------------------------------------------------
| (date&time)      | 2        | 56966647.92 | 273           | 2311            | 762          | 970134.69      | 24843           |
---------------------------------------------------------------------------------------------------------------------------------

Here is the correct output:

--------------------------------------------------------------------------------------------------------------------------------
| First_and_last N | store_id | total_sales | num_customers | count_inventory | count_titles | inventory_cost | num_rentals_out |
--------------------------------------------------------------------------------------------------------------------------------
| (date&time)      | 1        | 209691.93   | 326           | 2270            | 759          | 46205.30       | 92              |
--------------------------------------------------------------------------------------------------------------------------------
| (date&time)      | 2        | 208669.04   | 273           | 2311            | 762          | 46415.89       | 91              |
--------------------------------------------------------------------------------------------------------------------------------

Here is my code to try to get the above output:

SELECT NOW() AS 'Jordan_Rasmussen',
s.store_id,

-- Get the total sales
SUM(p.amount) AS total_sales,

-- Get the number of customers
COUNT(DISTINCT c.customer_id) AS num_customers,

-- Get the inventory count
COUNT(DISTINCT i.inventory_id) AS inventory_count,

-- Get the number of movie titles
COUNT(DISTINCT f.title) AS num_titles,

-- Get the inventory value 
SUM(DISTINCT f.replacement_cost) * COUNT(DISTINCT i.inventory_id)   AS inventory_value,

-- Get the number of movies rented that have not yet been returned 
COUNT(r.rental_date) AS num_rentals_out 

FROM store AS s 
LEFT JOIN inventory AS i ON s.store_id = i.store_id 
LEFT JOIN customer AS c ON s.store_id = c.store_id 
INNER JOIN rental AS r ON i.inventory_id = r.inventory_id
INNER JOIN payment AS p ON r.rental_id = p.rental_id
INNER JOIN film AS f ON i.film_id = f.film_id 


GROUP BY store_id;

I can get the correct results by themselves, but I notice that the more I join tables together, the more the results change. I've been scratching my head for a while now, but I'm just not sure what I'm missing.

Should I use subqueries? Or am I just joining incorrectly?

Sorry for such a big question, but I'm at a loss right now.
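
For what it's worth, the numbers blow up because every extra one-to-many join multiplies rows before SUM and COUNT run. A common fix, sketched here against the standard sakila schema (my suggestion, not the assignment's official solution), is to pre-aggregate each measure in its own derived table so it is one row per store before joining; the remaining columns follow the same pattern:

    -- Pre-aggregate per store, then join the one-row-per-store results
    SELECT NOW() AS Jordan_Rasmussen,
           s.store_id,
           sales.total_sales,
           cust.num_customers,
           inv.count_inventory
    FROM store AS s
    LEFT JOIN (SELECT i.store_id, SUM(p.amount) AS total_sales
               FROM inventory AS i
               JOIN rental  AS r ON r.inventory_id = i.inventory_id
               JOIN payment AS p ON p.rental_id = r.rental_id
               GROUP BY i.store_id) AS sales ON sales.store_id = s.store_id
    LEFT JOIN (SELECT store_id, COUNT(*) AS num_customers
               FROM customer
               GROUP BY store_id) AS cust ON cust.store_id = s.store_id
    LEFT JOIN (SELECT store_id, COUNT(*) AS count_inventory
               FROM inventory
               GROUP BY store_id) AS inv ON inv.store_id = s.store_id;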

mysql – Optimizing a slow query with many-to-many table joins

I have the following query with joins to a number of many-to-many junction tables:

profile_language
profile_industry
profile_contract_type
profile_contract_hour
profile_qualification

Execution takes approximately 3 to 4 seconds. When I try the same query without the many-to-many junction tables, it executes in less than 0.4 seconds.

select distinct `profiles`.*
      , `locations`.`name` as `location_name`
       , `candidate_view`.`last_viewed`
         , CASE WHEN candidate_shortlist.profile_id IS NOT NULL THEN true ELSE false END AS shortlisted
     , CASE WHEN unlocked_profiles.profile_id IS NOT NULL THEN true ELSE false END AS unlocked 
from `profiles` 
inner join `jobseekers` on `jobseekers`.`id` = `profiles`.`jobseeker_id`
inner join `locations` on `locations`.`id` = `profiles`.`location_id` 
inner join `profile_language` on `profile_language`.`profile_id` = `profiles`.`id` 
inner join `profile_industry` on `profile_industry`.`profile_id` = `profiles`.`id` 
left join `profile_contract_type` on `profile_contract_type`.`profile_id` = `profiles`.`id` 
left join `profile_contract_hour` on `profile_contract_hour`.`profile_id` = `profiles`.`id` 
left join `profile_qualification` on `profile_qualification`.`profile_id` = `profiles`.`id` 
left join (SELECT MAX(created_at) AS last_viewed
                , profile_id
             FROM candidate_views
            WHERE recruiter_id = 43 
             GROUP BY profile_id ) AS candidate_view on `candidate_view`.`profile_id` = `profiles`.`id` 
left JOIN (SELECT order_items.purchaseable_id as profile_id
             FROM orders
       INNER JOIN order_items on order_items.order_id = orders.id
       INNER JOIN recruiters on recruiters.id = orders.recruiter_id
            WHERE recruiters.company_id = 37
              AND order_items.purchaseable_type = "App\Profile" ) AS unlocked_profiles on `unlocked_profiles`.`profile_id` = `profiles`.`id` 
left join `candidate_shortlist` on `candidate_shortlist`.`profile_id` = `profiles`.`id` and `candidate_shortlist`.`recruiter_id` = 43 
    where `profiles`.`searchable` = 1 
      and `profiles`.`deleted_at` is NULL 
 order by `profiles`.`id` desc limit 25 offset 0

This is the EXPLAIN output:

+----+-------------+-----------------------+------------+--------+-------------------------------------------------------------------+-------------+---------+----------------------------------+------+----------+----------------------------------------------+
| id | select_type | table                 | partitions | type   | possible_keys                                                     | key         | key_len | ref                              | rows | filtered | Extra                                        |
+----+-------------+-----------------------+------------+--------+-------------------------------------------------------------------+-------------+---------+----------------------------------+------+----------+----------------------------------------------+
|  1 | PRIMARY     | profiles              | NULL       | ALL    | PRIMARY,profiles_jobseeker_id_unique,profiles_location_id_foreign | NULL        | NULL    | NULL                             | 2826 |     1.00 | Using where; Using temporary; Using filesort |
|  1 | PRIMARY     | jobseekers            | NULL       | eq_ref | PRIMARY                                                           | PRIMARY     | 4       | testjobsdb.profiles.jobseeker_id |    1 |   100.00 | Using index                                  |
|  1 | PRIMARY     | locations             | NULL       | eq_ref | PRIMARY                                                           | PRIMARY     | 4       | testjobsdb.profiles.location_id  |    1 |   100.00 | NULL                                         |
|  1 | PRIMARY     | profile_contract_type | NULL       | ref    | PRIMARY                                                           | PRIMARY     | 4       | testjobsdb.profiles.id           |    1 |   100.00 | Using index                                  |
|  1 | PRIMARY     | profile_contract_hour | NULL       | ref    | PRIMARY                                                           | PRIMARY     | 4       | testjobsdb.profiles.id           |    1 |   100.00 | Using index                                  |
|  1 | PRIMARY     | profile_qualification | NULL       | ref    | PRIMARY                                                           | PRIMARY     | 4       | testjobsdb.profiles.id           |    1 |   100.00 | Using index                                  |
|  1 | PRIMARY     | <derived2>            | NULL       | ref    | <auto_key0>                                                        | <auto_key0> | 4       | testjobsdb.profiles.id           |    2 |   100.00 | NULL                                         |
|  1 | PRIMARY     | profile_language      | NULL       | ref    | PRIMARY                                                           | PRIMARY     | 4       | testjobsdb.profiles.id           |    2 |   100.00 | Using index                                  |
|  1 | PRIMARY     | order_items           | NULL       | ALL    | order_items_order_id_foreign                                      | NULL        | NULL    | NULL                             |    9 |   100.00 | Using where                                  |
|  1 | PRIMARY     | orders                | NULL       | eq_ref | PRIMARY,orders_recruiter_id_foreign                               | PRIMARY     | 4       | testjobsdb.order_items.order_id  |    1 |   100.00 | NULL                                         |
|  1 | PRIMARY     | recruiters            | NULL       | eq_ref | PRIMARY,recruiters_company_id_foreign                             | PRIMARY     | 4       | testjobsdb.orders.recruiter_id   |    1 |   100.00 | Using where                                  |
|  1 | PRIMARY     | candidate_shortlist   | NULL       | eq_ref | PRIMARY,candidate_shortlist_profile_id_foreign                    | PRIMARY     | 8       | const,testjobsdb.profiles.id     |    1 |   100.00 | Using index                                  |
|  1 | PRIMARY     | profile_industry      | NULL       | ref    | PRIMARY                                                           | PRIMARY     | 4       | testjobsdb.profiles.id           |    4 |   100.00 | Using index; Distinct                        |
|  2 | DERIVED     | candidate_views       | NULL       | ref    | candidate_views_profile_id_foreign,Index 4                        | Index 4     | 4       | const                            |   21 |   100.00 | Using where; Using index                     |
+----+-------------+-----------------------+------------+--------+-------------------------------------------------------------------+-------------+---------+----------------------------------+------+----------+----------------------------------------------+

Note that the junction tables are needed to build a dynamic search query in PHP; their columns do not appear in the example above, but they would be added to the WHERE clause if a search parameter were entered, for example:

and `profile_contract_type`.`contract_type_id` in (1,2,3,4)

In addition, when I modify the query to perform a count, it takes even longer, approximately 4 to 5 seconds, for example:

select count(distinct `profiles`.`id`) as aggregate from `profiles`...

How can I optimize this query? Any help is appreciated.
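
One rewrite worth testing (a sketch of the idea, not a verified fix): junction tables that are only used for filtering can become EXISTS conditions, which avoids the row multiplication that forces DISTINCT and also keeps the count query cheap; shown here for a single hypothetical contract-type filter:

    -- profile_contract_type used purely as a filter instead of a join
    SELECT profiles.*,
           locations.name AS location_name
    FROM profiles
    INNER JOIN jobseekers ON jobseekers.id = profiles.jobseeker_id
    INNER JOIN locations  ON locations.id  = profiles.location_id
    WHERE profiles.searchable = 1
      AND profiles.deleted_at IS NULL
      AND EXISTS (SELECT 1
                  FROM profile_contract_type AS pct
                  WHERE pct.profile_id = profiles.id
                    AND pct.contract_type_id IN (1, 2, 3, 4))
    ORDER BY profiles.id DESC
    LIMIT 25 OFFSET 0;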

SQL Server 2017 – Analyzing the types of columns used in joins

Is it possible to do a deep type analysis based on the source code, views and stored procedures in order to identify joined columns with incompatible types?

I inherited a database that was not consistent in its use of varchar vs. nvarchar columns, and sometimes numeric columns are joined to text columns that contain numbers. Now there are changes coming that will bring more tables with millions of records into the database, and any mismatch between joined types can have a huge performance impact. These new tables have varchar fields and it is not possible to change them to nvarchar (they are populated by a third-party product which does not support nvarchar).

The options I have:

  1. Convert all varchar columns to nvarchar. Unfortunately, this is not possible; it would have been my preferred option.
    1b. Convert all nvarchar columns to varchar. Hmm... I think that is a rabbit hole I probably don't want to go down, although all of the textual data is supposed to be in English.
  2. Use the Microsoft Transact-SQL parser (DacFx) to analyze the queries; however, I think this would take significant programming effort. I would need a full list of all the SQL Server functions and the types they return.
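
A partial stopgap I can suggest (an assumption on my part, not something from the question): the catalog views can at least flag suspicious columns heuristically, for example column names that appear in user tables with more than one declared type:

    -- Heuristic only: it will not catch joins between differently named columns,
    -- but it is a cheap first pass over the schema (STRING_AGG needs SQL Server 2017+).
    SELECT c.name AS column_name,
           STRING_AGG(CONCAT(OBJECT_NAME(c.object_id), ' (', t.name, ')'), ', ') AS definitions
    FROM sys.columns AS c
    JOIN sys.types   AS t  ON t.user_type_id = c.user_type_id
    JOIN sys.tables  AS tb ON tb.object_id   = c.object_id
    GROUP BY c.name
    HAVING COUNT(DISTINCT t.name) > 1;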

Any other ideas?

Thank you