How to export products with tier prices directly from the database using SQL?

I want to export the following data from my Magento 2.3 store:

  • Status
  • Product Name
  • SKU
  • Price
  • Advanced Pricing
  • Quantity

How could I do it via SQL?
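For reference, a sketch of what such an export query could look like. This assumes a stock Magento 2.3 Community Edition schema: attribute values live in the EAV value tables, entity_type_id = 4 is the catalog product type on a default install, and attribute IDs are looked up from eav_attribute rather than hard-coded. Products with several tier prices will produce one row per tier.

```sql
SELECT
    e.sku,
    name_v.value   AS product_name,
    status_i.value AS status,          -- 1 = enabled, 2 = disabled
    price_d.value  AS price,
    tp.qty         AS tier_qty,        -- "Advanced Pricing" rows
    tp.value       AS tier_price,
    stock.qty      AS quantity
FROM catalog_product_entity e
LEFT JOIN catalog_product_entity_varchar name_v
       ON name_v.entity_id = e.entity_id
      AND name_v.store_id = 0          -- default scope
      AND name_v.attribute_id = (SELECT attribute_id FROM eav_attribute
                                 WHERE attribute_code = 'name' AND entity_type_id = 4)
LEFT JOIN catalog_product_entity_int status_i
       ON status_i.entity_id = e.entity_id
      AND status_i.store_id = 0
      AND status_i.attribute_id = (SELECT attribute_id FROM eav_attribute
                                   WHERE attribute_code = 'status' AND entity_type_id = 4)
LEFT JOIN catalog_product_entity_decimal price_d
       ON price_d.entity_id = e.entity_id
      AND price_d.store_id = 0
      AND price_d.attribute_id = (SELECT attribute_id FROM eav_attribute
                                  WHERE attribute_code = 'price' AND entity_type_id = 4)
LEFT JOIN catalog_product_entity_tier_price tp
       ON tp.entity_id = e.entity_id
LEFT JOIN cataloginventory_stock_item stock
       ON stock.product_id = e.entity_id;
```

If your edition uses the staging/row_id schema (Commerce), the value tables join on row_id instead of entity_id, so adjust accordingly.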

sql server – conversion failed when converting datetime from character string while inserting and updating the database

I'm trying to insert the time and date separately, calculate whether the time is later than a set time, and then insert that into the database.

Here is my code for the update:

string Date = DateTime.Now.ToString("dd-MM-yyyy");
string Time = DateTime.Now.ToString("h:mm:ss tt");
try
{
    SqlCommand comm2 = conn.CreateCommand();
    comm2.CommandText = "Update Time_Logs SET Time_Out = '" + Time + "' where Emp_Id = '" + EmpId.Text + "' and Date = '" + Date + "'";
    comm2.ExecuteNonQuery();
}
catch (Exception x)

And here is the code for inserting:

string Date = DateTime.Now.ToString("dd-MM-yyyy");
string Time = DateTime.Now.ToString("h:mm:ss tt");

try
{
    SqlCommand comm = conn.CreateCommand();
    comm.CommandText = "INSERT INTO Time_Logs (Emp_Id, Date, Time_In) VALUES('" + EmpId.Text + "','" + Date + "','" + Time + "')";
    comm.ExecuteNonQuery();

    DateTime time = DateTime.Parse(Time);
    DateTime inDate = DateTime.Parse("8:00:00 AM");
    TimeSpan ts1 = inDate.TimeOfDay;
    TimeSpan ts = time - inDate;
    if (ts < ts1)
    {
        SqlCommand comm2 = conn.CreateCommand();
        comm2.CommandText = "Update Time_Logs SET Late = '" + ts + "' where Emp_Id = '" + EmpId.Text + "' and Date = '" + Date + "'";
        comm2.ExecuteNonQuery();
    }
    else // on time: reset Late to zero
    {
        SqlCommand comm3 = conn.CreateCommand();
        comm3.CommandText = "Update Time_Logs SET Late = '00:00:00' where Emp_Id = '" + EmpId.Text + "' and Date = '" + Date + "'";
        comm3.ExecuteNonQuery();
    }
}
catch (Exception x)
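The usual root cause of this error is the date string itself: 'dd-MM-yyyy' is interpreted according to the session's DATEFORMAT/language setting, so SQL Server can try to read the day as a month and fail. A small T-SQL illustration (assuming a us_english session):

```sql
SET DATEFORMAT mdy;  -- the default for us_english sessions

-- Fails with "Conversion failed ..." because 25 is read as a month:
SELECT CAST('25-12-2020' AS datetime);

-- Works: style 105 tells CONVERT the string is dd-mm-yyyy:
SELECT CONVERT(datetime, '25-12-2020', 105);

-- Always unambiguous, regardless of session settings:
SELECT CAST('2020-12-25T08:00:00' AS datetime);
```

The more robust fix on the C# side is to skip string formatting entirely and pass DateTime values through SqlParameter objects (for example comm.Parameters.AddWithValue("@Date", DateTime.Today)), letting the driver send a typed value; this also closes the SQL injection hole that concatenating EmpId.Text creates.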

How can I investigate why SQL Server is choosing the “wrong” index?

I have a Transaction table with about 200 million records, a primary key clustered on Id, and two nonclustered indexes:

  • IX_SiloId_ChangedTime_IncludeTime
  • IX_SiloId_Time_IncludeContent

I run these two statements to update statistics before I proceed with the actual query:

UPDATE STATISTICS dbo.[Transaction] IX_SiloId_ChangedTime_IncludeTime WITH FULLSCAN;
UPDATE STATISTICS dbo.[Transaction] IX_SiloId_Time_IncludeContent WITH FULLSCAN;

This is my query:

DECLARE @Query SiloTimeQueryTableType; -- (SiloId, Time) with primary key clustered on SiloId
INSERT INTO @Query (SiloId, Time) VALUES
    (1, '2020-12-31'); -- 1000 total values, though it's still the same problem with just one

SELECT  t.* -- column list trimmed
FROM    [Transaction] t
JOIN    @Query q
    ON  t.SiloId = q.SiloId
    AND t.Time >= q.Time;

Now, for whatever reason, SQL Server chooses IX_SiloId_ChangedTime_IncludeTime, and the query then takes forever. If I force WITH (INDEX(IX_SiloId_Time_IncludeContent)) I get the result right away.

The correct index seems quite obvious here, yet SQL Server chooses the one that is not even keyed on Time.

I cannot understand this behaviour. From what I've read it is best to avoid index hints, though I built this index with exactly this query in mind.

So the question is: what can I do to figure out why SQL Server prefers the “wrong” index even though a much better one exists and I have just run a full statistics update?

Query plan for the forced index (here using a temp table instead of the TVP to check whether that changes anything, as an answer suggested; the result seems to be the same):

[query plan screenshot]

Query plan without forced index:

[query plan screenshot; captured as a live plan, since the unforced query takes too long to finish]
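Two things often explain this (a sketch using the object names from the question; syntax may need adjusting for your version). First, inspect the statistics histograms the optimizer is estimating from. Second, note that a table-valued parameter carries no column statistics, so the optimizer uses a fixed low row estimate for it, which can make the "wrong" index look cheaper:

```sql
-- Histograms the optimizer bases its row estimates on:
DBCC SHOW_STATISTICS ('dbo.[Transaction]', IX_SiloId_Time_IncludeContent) WITH HISTOGRAM;
DBCC SHOW_STATISTICS ('dbo.[Transaction]', IX_SiloId_ChangedTime_IncludeTime) WITH HISTOGRAM;

-- Compare estimated vs. actual rows in both plans:
SET STATISTICS IO, TIME ON;
-- Run the query once unhinted and once WITH (INDEX(IX_SiloId_Time_IncludeContent)),
-- then compare the estimated row counts on the seek/scan operators in the actual plans.
```

If the TVP's row estimate turns out to be the culprit, OPTION (RECOMPILE) on the statement (which lets the optimizer see the TVP's actual cardinality) or loading the values into a temp table, which does have statistics, are the usual workarounds.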

windows – Do I need a domain controller for multiple PCs accessing the same SQL server db? How do I configure it?


Dear all,
I'm a total newbie at connecting two PCs and making them use the same SQL Server DB from one computer, so please bear with me.
At home I have two PCs from my company (working from home due to Covid-19; I don't have better Windows images and I would prefer not to change them, or at least as little as possible):

PC | Windows                            | SQL Server                      | SSMS    | IP
 1 | 7 (version 6.1, build 7601, SP1)   | 2014 (installation in progress) | not yet |
 2 | 10 (version 1803, build 17134.345) | 2017                            | 18.1    |

The final goal: to have PC1 act as a DB server and PC2 as a client that can access the server's databases. On both PCs I will install the same application, which should be able (from either PC) to update the data on PC1, the DB server.

I’d been told I would need a domain controller (DC) for this scenario, but I really don’t have any clue what this means or how I should configure it:

  1. Do I need a DC?
  2. Is this DC something that comes with the Windows images?
  3. Is this DC something that comes with SQL Server?
  4. Is this an application I'll need to install separately? (I initially thought that if I install SQL Server on PC1, I would just have to enter the Windows credentials from PC2 and that would suffice.)
  5. Could you please give me step-by-step instructions on how to make this work?

Thank you!
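For what it's worth, a domain controller is not strictly required for this: in a workgroup, the usual approach is SQL Server authentication instead of Windows authentication. A sketch of the server-side setup on PC1 (the login, password, and database names are placeholders; mixed-mode authentication and the TCP/IP protocol must first be enabled in SQL Server Configuration Manager, and TCP port 1433 opened in the Windows firewall):

```sql
-- On PC1, the machine hosting the databases:
CREATE LOGIN app_user WITH PASSWORD = 'Str0ng!Placeholder';

USE MyAppDb;  -- placeholder database name
CREATE USER app_user FOR LOGIN app_user;
ALTER ROLE db_datareader ADD MEMBER app_user;
ALTER ROLE db_datawriter ADD MEMBER app_user;
```

PC2 would then connect with a connection string along the lines of Server=<PC1 IP>,1433;Database=MyAppDb;User Id=app_user;Password=...; no domain is involved.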

sql – Azure Data Studio – How do I create a shortcut to open procedures?

Hey folks,

I'm using Azure Data Studio and I know there is a shortcut (F12) to open a procedure. The problem is that when I open an existing procedure via that shortcut, it opens with a CREATE command in the header. I need a shortcut that opens the procedure with an ALTER header.

Does anyone know how?
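As a workaround, the procedure body can be scripted manually and the CREATE edited into ALTER. A T-SQL sketch (the procedure name is a placeholder):

```sql
-- Fetch the current definition of the procedure, then change CREATE to ALTER and run it:
SELECT OBJECT_DEFINITION(OBJECT_ID('dbo.MyProcedure'));

-- Or, line by line:
EXEC sp_helptext 'dbo.MyProcedure';
```

Azure Data Studio also lets you remap keyboard shortcuts under File > Preferences > Keyboard Shortcuts, though as far as I know there is no built-in "script as ALTER" command to bind one to.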

mysql – sql backup restore converts createdAt to local TZ

I have a remote MySQL DB in the cloud on JAWSDB.

I have createdAt and updatedAt timestamps for records that are in UTC TZ.

I back up these DBs using mysqldump -h HOST -u USER -p DB_NAME > backup.sql. I then restore them to a local MySQL server using
mysql -u USERNAME -p DB_NAME < backup.sql

However, upon inspecting the restored local copy, I notice that the createdAt timestamps are in my local TZ (EST, so currently 5 hours behind the UTC values that are actually in JAWSDB). The updatedAt timestamps are still in UTC, though.

Here is how I created those timestamp fields, by the way:


So it seems the restore of the dump is converting from UTC to EST just for the createdAt field. Could this be due to an engine or MySQL version mismatch between JawsDB and my local server?

Any insights? Thanks!
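A plausible explanation, assuming createdAt is a TIMESTAMP column while updatedAt is a DATETIME (or vice versa): MySQL converts TIMESTAMP values to and from the session time zone, while DATETIME values are stored and returned verbatim. mysqldump by default writes SET TIME_ZONE='+00:00' into the dump so TIMESTAMPs round-trip in UTC, but when you later SELECT them on a server whose time zone is EST they are displayed in EST. A quick check on both servers:

```sql
-- Compare the time zone settings of the cloud and local servers:
SELECT @@global.time_zone, @@session.time_zone;

-- Force the session to UTC so TIMESTAMP columns display as stored:
SET time_zone = '+00:00';
SELECT createdAt, updatedAt FROM my_table LIMIT 5;  -- my_table is a placeholder
```

mysqldump also has a --skip-tz-utc option that disables the UTC normalization of TIMESTAMP columns during the dump, which is worth experimenting with here.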

mysql – How do I write this query in SQL?


SQL Server Agent & sp_send_dbmail failing on permissions

I have set up a SQL Server Agent job to call sp_send_dbmail with a very simple SELECT on a certain table in a certain database. Within the target database's Properties -> Permissions, the database role 'public' has SELECT permission. The job runs fine, the email arrives, all good.

The problem is, if I turn off the worryingly generous 'public' SELECT permission and instead add the SQLServerAgent user (the user the Agent runs under and the owner of the job) and give it SELECT permission, the job fails.

Can anyone shed any light on this for me? I suspect a different user is involved in some way. I cannot run SQL Server Profiler (it's not installed) and I don't have access to the server to install it.
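A couple of things worth checking (a sketch; the principal and object names are placeholders). First, sp_send_dbmail lives in msdb, so the executing principal also needs membership in the DatabaseMailUserRole role there, not just SELECT on the queried table. Second, a job owned by a sysadmin runs its T-SQL steps as the Agent service account, while a job owned by a non-sysadmin runs them as that owner, so the principal that actually needs the permissions may not be the one you granted them to:

```sql
-- Allow the executing principal to send Database Mail:
USE msdb;
ALTER ROLE DatabaseMailUserRole ADD MEMBER [DOMAIN\AgentOrJobOwner];  -- placeholder

-- And grant SELECT on the source table in the queried database:
USE TargetDb;  -- placeholder
GRANT SELECT ON dbo.TargetTable TO [DOMAIN\AgentOrJobOwner];  -- placeholder
```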


Performance issues since migrating from SQL Server 2012R2 to 2019

I've migrated my database from SQL Server 2012R2 to a new SQL Server 2019. I simply backed up the database and restored it on the new SQL server. After that I imported and updated some additional data (quotes) with IMPORT, INSERT, UPDATE or MERGE, and in between some testers used the system for testing. Now some actions perform more slowly than on the old server.

  1. First I did some index tuning. Here are some questions I asked regarding this topic:

    Questions on updating statistics and index maintenance jobs

    Index and statistics optimization scripts duration and log bloat problem. Looking for good strategy? (closed)

  2. Second, I ran the same import on our test environment with SQL Server 2012R2, and there I don't have the performance issue.

Execution plan on Server 2012R2:
2012R2 Execution plan

Execution plan on Server 2019:
2019 Execution plan
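One common cause of regressions after restoring a 2012-era database on SQL Server 2019 is the newer cardinality estimator taking over once the database's compatibility level is raised. That theory can be tested without touching the queries (the database name is a placeholder):

```sql
-- Check what the restored database is currently running at:
SELECT name, compatibility_level FROM sys.databases WHERE name = 'MyDb';

-- Keep the 2019 compatibility level but fall back to the legacy (2012-style) estimator:
USE MyDb;
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;

-- Or test a single statement with the old estimator:
-- SELECT ... OPTION (USE HINT('FORCE_LEGACY_CARDINALITY_ESTIMATION'));
```

It is also worth running sp_updatestats once after the restore, since statistics carried over from the old server can mislead the new estimator.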

sql server – SQLBulkCopy & InMemory tables

It is possible to use an In-Memory Table with the SqlBulkCopy class. I'm not aware of any additional gotchas, but I did come across this GitHub issue about properly configuring its options (depending on the size of your table).

More importantly, you should understand the differences between an In-Memory Table and a regular table, specifically that locking and retry logic work differently for In-Memory Tables, which may be important to consider and may require additional changes to your application.
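For context, a minimal sketch of a memory-optimized table that could serve as a SqlBulkCopy target (the names and bucket count are illustrative, and the database needs a MEMORY_OPTIMIZED_DATA filegroup before this will run):

```sql
CREATE TABLE dbo.BulkTargetMem
(
    Id      INT           NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    Payload NVARCHAR(200) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

DURABILITY = SCHEMA_ONLY is the faster option for pure staging scenarios, at the cost of losing the rows on restart.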

Additional resources for In-Memory Tables:

  1. Microsoft Docs – Memory Optimized Tables

  2. Microsoft Docs – Transactions with Memory Optimized Tables