I want to export these data from my Magento 2.3:
- Status
- Product Name
- SKU
- Price
- Advanced Pricing
- Quantity
How could I do it via SQL?
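For reference, this is the direction I've been exploring so far, based on the default Magento 2 (Community Edition) EAV tables; I'm not sure it's correct, and the attribute IDs, store_id handling, and the tier-price join would need to be checked against the actual installation:

SELECT
    e.sku,
    name_v.value   AS product_name,
    status_i.value AS status,
    price_d.value  AS price,
    tier.value     AS advanced_price,  -- tier/group pricing ("Advanced Pricing") rows, if any
    stock.qty      AS quantity
FROM catalog_product_entity e
-- 'name' (varchar), 'status' (int) and 'price' (decimal) are EAV attributes;
-- entity_type_id = 4 is normally the catalog_product type, but confirm it in eav_entity_type
LEFT JOIN eav_attribute a_name   ON a_name.attribute_code = 'name'   AND a_name.entity_type_id = 4
LEFT JOIN catalog_product_entity_varchar name_v
       ON name_v.entity_id = e.entity_id AND name_v.attribute_id = a_name.attribute_id AND name_v.store_id = 0
LEFT JOIN eav_attribute a_status ON a_status.attribute_code = 'status' AND a_status.entity_type_id = 4
LEFT JOIN catalog_product_entity_int status_i
       ON status_i.entity_id = e.entity_id AND status_i.attribute_id = a_status.attribute_id AND status_i.store_id = 0
LEFT JOIN eav_attribute a_price  ON a_price.attribute_code = 'price'  AND a_price.entity_type_id = 4
LEFT JOIN catalog_product_entity_decimal price_d
       ON price_d.entity_id = e.entity_id AND price_d.attribute_id = a_price.attribute_id AND price_d.store_id = 0
LEFT JOIN catalog_product_entity_tier_price tier ON tier.entity_id = e.entity_id
LEFT JOIN cataloginventory_stock_item stock ON stock.product_id = e.entity_id;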
I'm trying to insert the time and date separately, calculate whether the clock-in is late compared to a set time, and then insert the result into the database. Here is my attempt; first the code for the update:
string Date = DateTime.Now.ToString("dd-MM-yyyy");
string Time = DateTime.Now.ToString("h:mm:ss tt");
SqlCommand comm2 = conn.CreateCommand();
comm2.CommandText = "Update Time_Logs SET Time_Out = '" + Time + "' where Emp_Id = '" + EmpId.Text + "' and Date = '" + Date + "'";
try
{
    conn.Open();
    comm2.ExecuteNonQuery();
    MessageBox.Show("Time_Out...");
    conn.Close();
    TimeCompute();
}
catch (Exception x)
{
    MessageBox.Show(x.Message);
    conn.Close();
}
And here is the code for the insert:
string Date = DateTime.Now.ToString("dd-MM-yyyy");
string Time = DateTime.Now.ToString("h:mm:ss tt");
SqlCommand comm = conn.CreateCommand();
comm.CommandText = "INSERT INTO Time_Logs (Emp_Id, Date, Time_In) VALUES('" + EmpId.Text + "','" + Date + "','" + Time + "')";
try
{
    conn.Open();
    comm.ExecuteNonQuery();
    MessageBox.Show("Time_In...");
    conn.Close();
    DateTime time = DateTime.Parse(Time);
    DateTime inDate = DateTime.Parse("8:00:00 AM");
    TimeSpan ts1 = inDate.TimeOfDay;
    TimeSpan ts = time - inDate;
    if (ts < ts1)
    {
        SqlCommand comm2 = conn.CreateCommand();
        comm2.CommandText = "Update Time_Logs SET Late = '" + ts + "' where Emp_Id = '" + EmpId.Text + "' and Date = '" + Date + "'";
        conn.Open();
        comm2.ExecuteNonQuery();
        conn.Close();
    }
    else
    {
        SqlCommand comm2 = conn.CreateCommand();
        comm2.CommandText = "Update Time_Logs SET Late = '" + ts + "' where Emp_Id = '" + EmpId.Text + "' and Date = '00:00:00'";
        conn.Open();
        comm2.ExecuteNonQuery();
        conn.Close();
    }
}
catch (Exception x)
{
    MessageBox.Show(x.Message);
    conn.Close();
}
I have a Transaction table with about 200 million records, one primary key clustered on Id, and two non-clustered indexes: IX_SiloId_ChangedTime_IncludeTime and IX_SiloId_Time_IncludeContent.
I run these two statements before the actual query, to update statistics:
UPDATE STATISTICS dbo.[Transaction] IX_SiloId_ChangedTime_IncludeTime WITH FULLSCAN
UPDATE STATISTICS dbo.[Transaction] IX_SiloId_Time_IncludeContent WITH FULLSCAN
This is my query:
DECLARE @Query SiloTimeQueryTableType -- (SiloId, Time) with primary key clustered on SiloId
INSERT INTO @Query VALUES
(1, '2020-12-31'), -- 1000 total values, though it's still the same problem with just one
SELECT t.*
FROM [Transaction] t
INNER JOIN @Query q
ON t.SiloId = q.SiloId
WHERE
t.Time >= q.Time
Now, for whatever reason, SQL Server chooses IX_SiloId_ChangedTime_IncludeTime, and the query then takes forever. If I force the other index with WITH (INDEX(IX_SiloId_Time_IncludeContent)), I get the result right away.
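For clarity, the fast variant is simply the same query as above with the hint added:

SELECT t.*
FROM [Transaction] t WITH (INDEX(IX_SiloId_Time_IncludeContent))
INNER JOIN @Query q
    ON t.SiloId = q.SiloId
WHERE t.Time >= q.Time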
The correct index is quite obvious here, yet SQL Server chooses the one that is not even keyed on Time. I cannot understand this behaviour; from what I read it is best to avoid index hints, but I created this index with exactly this query in mind.
So the question is: what can I do to figure out why SQL Server prefers the "wrong" index even though a much better one exists and I have just run a full statistics update?
Query plan with the forced index (here using a temp table instead of the TVP to check whether that changes anything, as the answer suggested; the result seems to be the same):
https://www.brentozar.com/pastetheplan/?id=rJOt3G00P
Query plan without the forced index (this one is a live plan, as the query takes too long to finish):
https://www.brentozar.com/pastetheplan/?id=ByFshGAAP
(more details/replicate from: https://superuser.com/questions/1612260)
Dear all,
I'm a total newbie at connecting two PCs and making them use the same SQL Server database from one computer, so please bear with me.
At home I have two PCs from my company (I'm working from home due to COVID-19; these are the Windows images I was given, and I would prefer to change them as little as possible):
| PC | Windows | SQL Server | SSMS | IP |
|---|---|---|---|---|
| 1 | 7, version 6.1, build 7601 SP1 | 2014 (installation in progress) | not yet | 192.168.0.10 |
| 2 | 10, version 1803, build 17134.345 | 2017 | 18.1 | 192.168.0.105 |
The final goal: PC1 acts as a DB server and PC2 as a client that can access the server's databases. On both PCs I will install the same application, which should be able (from either PC) to update the data on PC1, the DB server.
I've been told I would need a domain controller (DC) for this scenario, but I really don't have any clue what that means or how I should configure it.
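In case it helps make the question concrete: from what I've read, machines that are not joined to a domain can also use SQL Server authentication instead of Windows authentication, which would look roughly like this on PC1 (the login name, password, and database name below are only placeholders; I haven't tried this yet):

-- Run on PC1, the machine hosting SQL Server 2014.
-- app_user, the password, and MyAppDb are placeholder names.
CREATE LOGIN app_user WITH PASSWORD = 'ChangeMe_Str0ng!';
GO
USE MyAppDb;
GO
CREATE USER app_user FOR LOGIN app_user;
ALTER ROLE db_datareader ADD MEMBER app_user;
ALTER ROLE db_datawriter ADD MEMBER app_user;
GO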
Thank you!
R
Hey folks,
I'm using Azure Data Studio and I know there is a shortcut (F12) to open a procedure. The problem is that when I open an existing procedure via this shortcut, it opens with a CREATE command in the header. I need a shortcut that opens the procedure with an ALTER header instead.
Does anyone know how?
I have a remote MySQL DB in the cloud on JAWSDB.
I have createdAt and updatedAt timestamps on the records, stored in UTC.
I back up the DB using mysqldump -h HOST -u USER -p > backup.sql and then restore it to a local MySQL server using mysql -u USERNAME -p DB_NAME < backup.sql.
However, upon inspecting the restored local copy, I notice that the createdAt timestamps are in my local TZ (EST, so currently 5 hours behind the UTC values that are actually in JawsDB). The updatedAt timestamps are still in UTC, though.
Here’s how I created those timestamp fields btw.
`createdAt` TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
`updatedAt` DATETIME DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
So it seems like the restore of the dump is converting from UTC to EST just for the createdAt field. Could this be due to an engine or MySQL version mismatch between JawsDB and my local server?
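One thing that might be relevant here, if I understand the docs correctly: MySQL converts TIMESTAMP columns to the session time zone on retrieval, while DATETIME values are stored and returned as-is, so differing time zone settings between the two servers would affect createdAt but not updatedAt. A quick check I could run on both servers:

-- Compare the output on JawsDB and on the local server
SELECT @@global.time_zone, @@session.time_zone, NOW(), UTC_TIMESTAMP();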
Any insights? Thanks!
I have set up a SQL Server Agent job to call sp_send_dbmail with a very simple SELECT on a certain table in a certain database. Within Properties -> Permissions of the target database, the database role 'public' has SELECT permission. The job runs fine, the email arrives, all good.
The problem is, if I turn off the worryingly generous 'public' SELECT permission and instead try to add the SQLServerAgent user (the account the agent runs under, and the owner of the agent job) and grant it SELECT permission, the job fails.
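For reference, this is roughly what I did to add the user and grant SELECT (the login and table names here are illustrative, not the real ones; the actual agent service account may differ):

-- Run in the target database
CREATE USER [NT SERVICE\SQLSERVERAGENT] FOR LOGIN [NT SERVICE\SQLSERVERAGENT];
GRANT SELECT ON dbo.SomeTable TO [NT SERVICE\SQLSERVERAGENT];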
Can anyone shed any light on this for me? I suspect that maybe a different user is involved in some way. I cannot get SQL Server Profiler to run (it’s not installed) and I don’t have access to the server to install it.
Thanks
C
I've migrated my database from SQL Server 2012 R2 to a new SQL Server 2019 instance. I simply backed up the database and restored it on the new server. After that I imported and updated some additional data (quotes) with IMPORT, INSERT, UPDATE, or MERGE, and in between some testers have used the system for testing. Now some actions are slower than on the old server.
First I’ve done some index tuning. Here are some questions regarding this topic:
Questions on updating statistics and index maintenance jobs
Index and statistics optimization scripts duration and log bloat problem. Looking for good strategy? (closed)
Second, I ran the same import on our test environment with SQL Server 2012 R2, and there I don't have the performance issue.
Execution plan on Server 2012R2:
Execution plan on Server 2019:
It is possible to use an In-Memory (memory-optimized) table with the SqlBulkCopy class. I'm not aware of any additional gotchas, but I did come across this GitHub issue about making sure its options are configured properly (depending on the size of your table).
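For context, a memory-optimized table that could serve as the SqlBulkCopy destination looks roughly like this (a minimal sketch; the table and column names are made up, and the database must already have a MEMORY_OPTIMIZED_DATA filegroup):

-- Hypothetical staging table for the bulk copy target
CREATE TABLE dbo.BulkStaging
(
    Id      INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
    Payload NVARCHAR(400) NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);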
More importantly, you should understand the differences between an In-Memory table and a regular table, in particular that locking and retry logic work differently for In-Memory tables; this may be important to consider and may require additional changes to your application.
Additional resources for In-Memory Tables:
Microsoft Docs – Memory Optimized Tables
Microsoft Docs – Transactions with Memory Optimized Tables