sql server – Average Waiting Tasks Count / Resource Waits

I’m trying to help troubleshoot a sudden performance problem with SQL Server 2014 SP2 (12.0.5659.1). I don’t have details on what exactly the scheduled stored procedures are doing, but I know that the problematic procedure(s) use OLEDB to remotely query a SQL Server instance and refresh local tables. Executions that used to complete in under an hour now run for days, basically never finishing. We’ve verified that no physical network issue exists.

Looking at the Activity Monitor the things that stand out are:

  • Waiting Tasks averaging 9 or 10.

  • Network I/O and SQLCLR both have wait times in the 2000-4000 ms/sec range, with average waiter counts of 3 or 4.

  • The cumulative wait times on both SQLCLR and Network I/O are in the neighborhood of 500K seconds.

  • Latch also looks pretty high: 2600 ms/sec, with a waiter count of 3-5.

Are these way out of line or am I barking up the wrong tree?
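
For anyone digging further, the same cumulative numbers can be pulled outside Activity Monitor with a query against sys.dm_os_wait_stats. A minimal sketch (the excluded wait types are illustrative, not an exhaustive benign-wait list):

SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms / 1000.0 AS wait_time_sec,            -- cumulative wait time
    signal_wait_time_ms / 1000.0 AS signal_wait_sec    -- portion spent waiting for CPU
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP',
                        'SQLTRACE_BUFFER_FLUSH', 'XE_TIMER_EVENT')
ORDER BY wait_time_ms DESC;

If OLEDB dominates that list, the time is being spent inside the remote queries themselves rather than on the local instance.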

sql – How do I extract data from a main table based on this specific scenario?

I am using SQL Server 2014. I have a table (T1) which contains the list of IDs of cancelled bookings and their equivalent re-bookings.

Extract of table T1:

CancelledID       Re-bookingID
  301                754
  387                801
  400                900
  ...

Each CancelledID has a unique equivalent in the Re-bookingID column.

I have another table (T2) which contains the list of ALL BookingIDs with additional information related to each ID. An extract of this table is shown below:

BookingID     MonthOfStay      RoomNights
...
301             2019-03-01        10
387             2019-04-01         7
400             2019-03-01         5
754             2019-08-01        10
801             2019-09-01         3
900             2019-07-01         5
900             2019-08-01         4
...   

I need a T-SQL query which will give me the following output:

  BookingID       Cancelled_MonthOfStay     Re-booking_MonthOfStay     RoomNights
    301                2019-03-01                                           10
    387                2019-04-01                                            7
    400                2019-03-01                                            5
    754                                           2019-08-01                10
    801                                           2019-09-01                 3
    900                                           2019-07-01                 5
    900                                           2019-08-01                 4

As you can see, a re-booking can span over 2 months with additional room nights.

I am thinking about joins between the two tables, but I am stuck on the logic to use for the joins (if that is even the right way of tackling the problem).

Any help would be appreciated.
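
One possible shape for the query, as a sketch: join T2 to T1 twice, once on the cancelled side and once on the re-booking side, and emit the month only in the column that applies. This assumes the hyphenated column is reachable as [Re-bookingID] (bracketed because of the hyphen):

SELECT
    t2.BookingID,
    CASE WHEN c.CancelledID IS NOT NULL THEN t2.MonthOfStay END AS Cancelled_MonthOfStay,
    CASE WHEN r.[Re-bookingID] IS NOT NULL THEN t2.MonthOfStay END AS [Re-booking_MonthOfStay],
    t2.RoomNights
FROM T2 AS t2
LEFT JOIN T1 AS c ON c.CancelledID = t2.BookingID        -- row is a cancelled booking
LEFT JOIN T1 AS r ON r.[Re-bookingID] = t2.BookingID     -- row is a re-booking
WHERE c.CancelledID IS NOT NULL
   OR r.[Re-bookingID] IS NOT NULL;                      -- keep only rows listed in T1

Since 900 appears twice in T2, the join naturally returns both of its monthly rows.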

mysql-connector – Error when inserting data: "You have an error in your SQL syntax"

sql = """
INSERT INTO ras_relatorio_evento
(ras_rle_id_veiculo, ras_rle_contador, ras_rle_data, ras_rle_data_coleta, ras_rle_id_indice)
VALUES %s;
"""

insert = "(9733, 13, '2020-05-27', '2020-05-28 17:31:52', 4794), (9851, 13, '2020-05-27', '2020-05-28 17:31:52', 4794), (9883, 15, '2020-05-27', '2020-05-28 17:31:52', 4794)"

cursor.execute(sql, (insert,))
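
The likely cause of the syntax error: cursor.execute() quotes and escapes each bound parameter, so the whole insert string arrives at the server as a single quoted literal (VALUES '(9733, 13, ...)'), which is not valid SQL. The statement presumably intended is the multi-row insert below; in mysql-connector this is normally produced by passing a list of tuples to executemany() rather than splicing the rows in as one string:

INSERT INTO ras_relatorio_evento
    (ras_rle_id_veiculo, ras_rle_contador, ras_rle_data, ras_rle_data_coleta, ras_rle_id_indice)
VALUES
    (9733, 13, '2020-05-27', '2020-05-28 17:31:52', 4794),
    (9851, 13, '2020-05-27', '2020-05-28 17:31:52', 4794),
    (9883, 15, '2020-05-27', '2020-05-28 17:31:52', 4794);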

sql server – Why is it important for every table to have a primary key?

So I have some tables in SQL Server that are essentially a list of sales, with columns like:

ProductID 
SalesOrderID
ProductFamilyID 
ProductCost
ProductSource

and so on. In this case, none of the columns are necessarily unique, so I can’t create a primary key from any combination of them. In fact, the only constraint that I really have on the table is that I need every row in the table to be a unique combination of the columns. So I’m assuming something like a unique index would be the way to go there.

The only primary key I could add is something like an auto-increment primary key. But what would be the actual use of that, database-wise? What are the possible problems with not creating a primary key for a table like this?
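
For concreteness, a sketch of the pattern being discussed: a surrogate identity key plus a unique constraint over the business columns (the table name and column types here are illustrative):

CREATE TABLE dbo.Sales
(
    SalesRowID      int IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_Sales PRIMARY KEY,    -- surrogate key; gives replication and ORMs a row handle
    ProductID       int NOT NULL,
    SalesOrderID    int NOT NULL,
    ProductFamilyID int NOT NULL,
    ProductCost     money NOT NULL,
    ProductSource   nvarchar(50) NOT NULL,
    -- enforces the "every row is a unique combination" rule
    CONSTRAINT UQ_Sales UNIQUE (ProductID, SalesOrderID, ProductFamilyID, ProductCost, ProductSource)
);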

sql – How to write inner join using posts_clauses?

Can someone please show me how I can achieve the same result as this SQL:

SELECT *
FROM wp_2_posts
INNER JOIN wp_2_icl_translations
ON wp_2_icl_translations.element_id = wp_2_posts.id
AND wp_2_icl_translations.language_code = 'en'
WHERE wp_2_posts.post_type = 'properties';

using posts_clauses? In other words, I’d like a posts_clauses filter that performs the same query as the SQL shown above.

I am trying to query posts that are only in English.

mysql – how to display item wise count report using SQL query

I want to write a query for a particular result.

I created a table with three columns:
1. Category (values: water pump, cutting motor, driller)
2. PurchasedDate (a date)
3. WorkingProperly (Yes/No)

I want the number of items in each category, broken down by month and by whether they are working or faulty.
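
A sketch of one way to do this in MySQL, assuming the table is named items and the columns are Category, PurchasedDate, and WorkingProperly (all names here are guesses at the real schema):

SELECT
    Category,
    DATE_FORMAT(PurchasedDate, '%Y-%m') AS PurchaseMonth,
    SUM(WorkingProperly = 'Yes') AS WorkingCount,   -- boolean comparisons sum as 0/1 in MySQL
    SUM(WorkingProperly = 'No')  AS FaultyCount,
    COUNT(*)                     AS TotalCount
FROM items
GROUP BY Category, DATE_FORMAT(PurchasedDate, '%Y-%m')
ORDER BY Category, PurchaseMonth;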

Aggregate Query in SQL Developer

Assume you have the following table TAB(name, value):

name value
a        2
a        3
a        2
b        4
b        5
b        5
c        6
c        6
c        6

What are the result sets of the following queries?

query 1:

select name, sum(value)
from TAB
group by name
order by name

query 2:

select name, sum(distinct value)
from TAB
group by name
order by name

query 3:

select name, value
from TAB
group by name
order by name
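
Working through the sample data: query 1 sums every row per group, while query 2 sums only the distinct values per group:

query 1:

name  sum(value)
a          7
b         14
c         18

query 2:

name  sum(distinct value)
a          5
b          9
c          6

Query 3 is not valid SQL as written: value appears in the select list but is neither grouped nor aggregated, so Oracle (the database behind SQL Developer) rejects it with ORA-00979: not a GROUP BY expression.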

sql server – T-SQL Merge not updating in explicit transaction

We have the following code in a stored procedure: a MERGE statement and an UPDATE against a different table, inside an explicit transaction. When the procedure is called from a batch job, the MERGE does not update any rows in inventory.gtin, yet the UPDATE below it works and import.file_data is updated. But when the procedure is executed manually, the MERGE works fine and updates/inserts data. Is this a bug in the MERGE statement?

DECLARE @targetVendorCode nvarchar(20)
DECLARE gtin_cursor CURSOR FOR

    SELECT DISTINCT aVendorCode
    FROM #fileData fd

OPEN gtin_cursor
FETCH NEXT FROM gtin_cursor INTO @targetVendorCode
WHILE @@FETCH_STATUS = 0
BEGIN

    BEGIN TRANSACTION;

    MERGE [inventory].[gtin] AS target
    USING (SELECT * FROM #fileData WHERE ngtin IS NOT NULL AND aVendorCode = @targetVendorCode) AS source
    ON (target.[ngtin] = source.[ngtin])
    WHEN MATCHED AND ( ( source.createDateTime > COALESCE(target.fileCreateDateTime, '04/02/1982') ) OR source.[status] = 'override' ) THEN
        UPDATE SET
            [skuId]              = source.[skuId],
            [lastUpdateFileId]   = source.[fileId],
            [vendorCode]         = COALESCE(source.[aVendorCode], source.[vendorCode]),
            [lastUpdateDateTime] = source.[lastUpdateDateTime],
            [fileCreateDateTime] = source.createDateTime
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (
                [skuId],
                [lastUpdateFileId],
                [vendorCode],
                [lastUpdateDateTime],
                [fileCreateDateTime]
               )
        VALUES (
                source.[skuId],
                source.[fileId],
                COALESCE(source.[aVendorCode], source.[vendorCode]),
                source.[lastUpdateDateTime],
                source.createDateTime
               );

    SELECT @action = '[{"type":"distribute","actionDateTime":"' + CONVERT(VARCHAR(19), GETUTCDATE(), 120) + '","insertDateTime":"' + CONVERT(VARCHAR(19), GETUTCDATE(), 120) + '","referenceId":' + TRY_CAST(@fileId AS NVARCHAR(20)) + '}]'

    UPDATE fd
    SET [status] = 'processed',
        [modifiedDateTime] = GETUTCDATE(),
        [modifiedUserName] = SYSTEM_USER
    FROM [import].[file_data] fd
    WHERE fileId = @fileId

    IF @@TRANCOUNT > 0
    BEGIN
        COMMIT TRANSACTION;
    END
    ELSE
    BEGIN
        ROLLBACK TRANSACTION;
    END

    FETCH NEXT FROM gtin_cursor INTO @targetVendorCode
END

CLOSE gtin_cursor
DEALLOCATE gtin_cursor
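
One way to narrow this down, as a diagnostic sketch rather than a fix: capture what the MERGE actually does on each pass with an OUTPUT clause, so the batch run can show whether rows are not matching at all or are matching but being filtered out by the createDateTime/override predicate. The table variable and its column types here are assumptions:

-- Declared once, before the cursor loop:
DECLARE @mergeLog TABLE (mergeAction nvarchar(10), ngtin nvarchar(50));

-- Inside the loop, extend the existing MERGE with an OUTPUT clause just
-- before its terminating semicolon:
--     OUTPUT $action, inserted.[ngtin] INTO @mergeLog (mergeAction, ngtin)

-- After the loop, summarize what happened:
SELECT mergeAction, COUNT(*) AS actionCount
FROM @mergeLog
GROUP BY mergeAction;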

Cloud MS SQL Server db


Hi there,

I’m looking to move the MS SQL Server sales db (60 MB) away from my cloud web server and make it accessible from more than one website (ASP.NET MVC).

The questions are:

  • Can I find a reliable and fast solution under $50/month? Where? Browsing Google, Azure and AWS prices look high. The only requirement is a couple of weekly backups.

  • Is it worth switching away from MS SQL Server to save money?

  • From a data-security point of view, is it a bad move? Should I open up my local MS SQL Server instance instead?

Thanks!

sql server – SQL Azure Indexes need Rebuilding for Foreign Keys to work

We have a product that runs off SQL databases.

Until recently, each of our clients had their own file server running SQL Server; however, we have now started hosting on Azure for some existing and some new clients.

Attempting to roll out the latest version of the software to our clients, we’ve run into an issue with some (but not all!) of the Azure-hosted databases.

The update required adding some new tables, which had foreign keys to some existing tables. This worked fine on all the SQL Server hosted databases, but on some of the Azure-hosted databases we got the following error:

There are no primary or candidate keys in the referenced table 'TableName' that match the referencing column list in the foreign key 'FK_Name'

But there were definitely valid PKs, all the databases had the same schema, and only some had issues. Those that had issues also reported the issue against different tables.

The solution turned out to be that the indexes needed rebuilding on some of the tables:

ALTER INDEX ALL ON TableName REBUILD;

In some cases, more than one index needed to be rebuilt.

Once that was done, the scripts to create the new tables ran without issue.

All of the Azure databases are quite new, and most of them barely have any data in them. In some cases the affected tables hold half a dozen records, so this is not a case of indexes getting too big or too fragmented.

The tables themselves worked fine, before and after the indexes were rebuilt, and the software didn’t throw any errors.

The only obvious issue related to this is being unable to reference the PKs with FKs.

However, we are obviously concerned that there is some underlying issue with Azure and indexes, and that we could face worse problems down the line.

Has anyone else experienced this issue?

Is there something we need to be doing on Azure to stop indexes breaking?

I ran overnight jobs on all the Azure databases to rebuild all indexes, so hopefully we’re now good, but I’d still like to know what happened, and why, and how to stop it happening again.
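
For reference, a sketch of the kind of rebuild-everything job described above; it generates an ALTER INDEX ... REBUILD statement for every user table and runs the batch (assumes it is executed in each database with sufficient permissions):

DECLARE @sql nvarchar(max) = N'';

-- Build one ALTER INDEX statement per user table.
SELECT @sql += N'ALTER INDEX ALL ON '
             + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name)
             + N' REBUILD;' + NCHAR(10)
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

EXEC sys.sp_executesql @sql;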

TL;DR:

Different indexes, on different tables, on different Azure databases, have broken somehow in the last few weeks (newest DB) to months (oldest DB), and we’d kinda like to know why!