Dynamic SQL generation is not supported against multiple base tables in C#

I am running into the issue stated in the title. I realize I have to change my SQL statement, but I am not sure how or what I am supposed to change.

Here is my code:

/****** Object:  StoredProcedure [dbo].[sp_Get_CSF_Transactions_MTPT]    Script Date: 10/22/2020 3:08:59 PM ******/
-- =============================================
-- Author:      <Author,,Name>
-- Create date: <Create Date,,>
-- Description: <Description,,>
-- =============================================
ALTER PROCEDURE [dbo].[sp_Get_CSF_Transactions_MTPT]
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for procedure here
    SELECT DISTINCT
        EOD_CSF_Archive.ID, EOD_CSF_Archive.RecID, EOD_CSF_Archive.mercid, EOD_CSF_Archive.termid, EOD_CSF_Archive.proccode, EOD_CSF_Archive.trandt, EOD_CSF_Archive.trantm, EOD_CSF_Archive.pan, EOD_CSF_Archive.tranamt, EOD_CSF_Archive.trandbcr, EOD_CSF_Archive.tranrc, EOD_CSF_Archive.merccurr, EOD_CSF_Archive.customer, EOD_CSF_Archive.wkiv,
        EOD_CSF_Archive.Merc_Settled, EOD_CSF_Archive.Proc_Date_Merc_Settled, EOD_CSF_Archive.DataWarehouse, EOD_CSF_Archive.DataWarehouseDate, EOD_CSF_Archive.TransactionIdentifier, EOD_CSF_Archive.network, EOD_CSF_Archive.ProdDesc, EOD_CSF_Archive.tranauth, EOD_CSF_Archive.transeq, EOD_Mr_Payments.AccountNumber
    FROM EOD_CSF_Archive
    JOIN EOD_Mr_Payments ON EOD_CSF_Archive.EntryID = EOD_Mr_Payments.EntryID
    WHERE (EOD_CSF_Archive.DataWarehouse = 0 OR EOD_CSF_Archive.DataWarehouse IS NULL)
      AND EOD_CSF_Archive.Merc_Settled = 1
    ORDER BY mercid, termid
END

Any help would be greatly appreciated!

sql server – Calendar Event table – best practice setup for range queries and individual retrieval

This seems like a generic problem that should have been solved already, but I can’t find anything about it. In general the question is: given a table where data is read by date range, what is the best, most efficient setup?

We have a calendar event table that will quickly grow to millions of records.

The schema is something like:

CREATE TABLE [dbo].[CalendarEvent](
    [Id] [uniqueidentifier] NOT NULL,
    [DtStart] [datetime] NULL,
    [DtEnd] [datetime] NULL,
    [Created] [datetime] NULL,
    [LastModified] [datetime] NULL,
    [CalendarEventType] [nvarchar](255) NULL,
    [CalendarId] [uniqueidentifier] NULL,
    CONSTRAINT [PK_CalendarEvent] PRIMARY KEY CLUSTERED ([Id] ASC)
)

Forget about recurring events, etc. as that doesn’t bear on our problem.

Most queries will be of the type:

select * from CalendarEvent where CalendarId = 'b5d6338f-805f-4717-9c0a-4600f95ac515' AND dtStart > '01/01/2020' AND dtStart < '10/22/2020'

Notice no joins, etc.

But we will also have some that select for individual events, and include joins:

select * from CalendarEvent ce join tags t on ce.Id = t.CalendarEventId where ce.Id = '17606330-5486-496a-a91c-f5d0e123bfff'

Questions and ideas:

  1. Should we keep the Id as the PK, but make the start date the clustered index?
  2. Should we just make an index on dtStart?
  3. Should we partition by month?
  4. Should we denormalize a little and duplicate the dtStart data by including year and month columns that we can index and use in our range queries?

In general, when you do your querying on a table by date range, what is the best setup for this type of table?
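For what it’s worth, the standard starting point for this shape of query is a composite index whose leading column is the equality predicate (CalendarId) and whose second column is the range predicate (DtStart), so the engine can seek straight into one calendar’s date window. A minimal sketch in SQLite (index name and trimmed-down schema are illustrative, not from the question):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE CalendarEvent (
    Id TEXT PRIMARY KEY,
    CalendarId TEXT,
    DtStart TEXT)""")
# Equality column first, range column second.
conn.execute("""CREATE INDEX IX_CalendarEvent_CalendarId_DtStart
    ON CalendarEvent (CalendarId, DtStart)""")

# EXPLAIN QUERY PLAN shows whether the range query can seek on the index
# instead of scanning the table.
plan = conn.execute("""EXPLAIN QUERY PLAN
    SELECT * FROM CalendarEvent
    WHERE CalendarId = ? AND DtStart > ? AND DtStart < ?""",
    ("b5d6338f-805f-4717-9c0a-4600f95ac515", "2020-01-01", "2020-10-22")
).fetchall()
print(plan[0][-1])  # the plan detail names the index when it is used
```

In SQL Server terms this is option 2 widened into a composite index; whether it should also be the clustered index (option 1) mostly depends on how often you fetch single events by Id versus scan ranges.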

Note: If you think this question could be improved to help more people, make it more generic and widely applicable, such as removing references to a Calendar Event table specifically, and making this just about date range querying in any type of table, please help me do that.

SQL injection without the use of union and select

I was wondering if there is a way to use the SELECT and UNION keywords without being caught by an algorithm (see below) that filters these keywords out.

 $filter = array('UNION', 'SELECT');

    // Reject the request if it contains a banned keyword
    foreach ($filter as $banned) {
        if (strpos($_GET['q'], $banned) !== false) die("Hacker detected");
        if (strpos($_GET['q'], strtolower($banned)) !== false) die("Hacker detected");
    }
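For context on why such a filter is weak: it only checks the all-uppercase and all-lowercase forms of each keyword, while SQL itself treats keywords case-insensitively. A small Python stand-in for the PHP logic (function name and payloads are illustrative) demonstrates the mixed-case bypass:

```python
def naive_filter(q: str) -> bool:
    """Mimics the PHP check: flags q only when a keyword appears
    in all-uppercase or all-lowercase form."""
    for banned in ("UNION", "SELECT"):
        if banned in q or banned.lower() in q:
            return True  # "Hacker detected"
    return False

# All-caps and all-lowercase payloads are caught...
caught_upper = naive_filter("1 UNION SELECT password FROM users")
caught_lower = naive_filter("1 union select password from users")
# ...but a mixed-case payload slips through, even though the database
# parses UnIoN SeLeCt exactly like UNION SELECT.
slipped = not naive_filter("1 UnIoN SeLeCt password FROM users")
```

The practical takeaway is that keyword blacklists are not a defense; parameterized queries (prepared statements) close the hole regardless of casing or encoding tricks.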

sql – Is deterministic SELECT possible without specifying an ORDER BY?

When we show a table with many thousands of records, we only show a small part of the total result set so as not to send too much data to the client at once. Clients can request more parts of the result set through pagination.

This is how a typical query for such a table looks:

SELECT `expression1`, `expression2` FROM table (JOINS) ORDER BY table.id LIMIT 0,100

The LIMIT clause restricts the result set so that the table does not become too big. The ORDER BY clause enforces a deterministic order.

I have noticed that these queries can become very slow when several database tables are joined and at least one of them contains a large number of rows. Leaving out the ORDER BY clause greatly improves the speed, up to 10000%. Apart from not having to sort, I assume the optimizer can then apply the LIMIT before materializing the full result set.

Ironically, it is not necessary that the result set is ordered by id. We just need a deterministic order to facilitate reliable pagination, for example one that follows the order of the primary key or a predetermined index.

Is there an instruction to achieve deterministic order without the ORDER BY clause?
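There is no general instruction for a deterministic order without ORDER BY (without it, the engine is free to return rows in any order). The usual compromise is keyset (seek) pagination: keep ORDER BY on the primary key, but add a WHERE predicate that seeks past the last row already shown, so the PK index delivers rows in order and no full sort of the join is needed. A small SQLite sketch (table, data, and page size are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO t (id, val) VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1, 251)])

def page(last_id: int, size: int = 100):
    # Seek past the last id already shown; the primary-key index
    # returns rows in id order, so only `size` rows are read.
    return conn.execute(
        "SELECT id, val FROM t WHERE id > ? ORDER BY id LIMIT ?",
        (last_id, size)).fetchall()

first = page(0)               # ids 1..100
second = page(first[-1][0])   # ids 101..200
```

The ORDER BY stays in the query, but because it matches the index and the seek predicate jumps straight to the start of the page, it avoids the sort cost that `LIMIT offset, count` pays on deep pages.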

sql – Given the ER diagram, can someone provide me the SQLite code for the queries asked below

Given the ER diagram, can someone provide me the SQLite code for the queries asked below?


Which musical genres are most common in the hosted playlists?
Which is the longest album in the catalog? How many minutes does it last?
Which artist/band has the largest number of albums available?

I am stuck on these questions; I appreciate everyone’s help.
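Since the diagram is missing, the exact table and column names are unknown; assuming a Chinook-style layout (Genre, Album, Artist, Track, PlaylistTrack with the column names below are all assumptions), the three questions are each a JOIN plus GROUP BY/aggregate. A runnable SQLite sketch with toy data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Genre (GenreId INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Artist (ArtistId INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE Album (AlbumId INTEGER PRIMARY KEY, Title TEXT, ArtistId INTEGER);
CREATE TABLE Track (TrackId INTEGER PRIMARY KEY, GenreId INTEGER,
                    AlbumId INTEGER, Milliseconds INTEGER);
CREATE TABLE PlaylistTrack (PlaylistId INTEGER, TrackId INTEGER);
INSERT INTO Genre VALUES (1,'Rock'),(2,'Jazz');
INSERT INTO Artist VALUES (1,'Band A'),(2,'Band B');
INSERT INTO Album VALUES (1,'First',1),(2,'Second',1),(3,'Third',2);
INSERT INTO Track VALUES (1,1,1,200000),(2,1,2,300000),(3,2,3,100000);
INSERT INTO PlaylistTrack VALUES (1,1),(1,2),(2,3);
""")

# 1. Most common genres across the playlists
genres = conn.execute("""
    SELECT g.Name, COUNT(*) AS n
    FROM PlaylistTrack pt
    JOIN Track t ON t.TrackId = pt.TrackId
    JOIN Genre g ON g.GenreId = t.GenreId
    GROUP BY g.Name ORDER BY n DESC""").fetchall()

# 2. Longest album in the catalog, converted to minutes
longest = conn.execute("""
    SELECT a.Title, SUM(t.Milliseconds) / 60000.0 AS minutes
    FROM Album a JOIN Track t ON t.AlbumId = a.AlbumId
    GROUP BY a.AlbumId ORDER BY minutes DESC LIMIT 1""").fetchone()

# 3. Artist with the most albums available
top_artist = conn.execute("""
    SELECT ar.Name, COUNT(*) AS albums
    FROM Artist ar JOIN Album al ON al.ArtistId = ar.ArtistId
    GROUP BY ar.ArtistId ORDER BY albums DESC LIMIT 1""").fetchone()
```

The SQL inside the three `execute` calls runs unchanged in SQLite against the real schema once the actual table and column names from the diagram are substituted.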

SQL Server RDS Database Restore from S3 stuck in Restore Mode

We are doing a restore of a SQL Server database from S3 on an RDS instance. Although the job says the restore is complete, when we go to access the database it’s stuck in “Restoring” mode and we can’t do anything with it.

We have spun up a new SQL Server RDS instance and we’re trying to restore a database from a backup stored in S3. This should be simple enough; we did it in the past, maybe a year ago or so. We run the command from the AWS documentation:

EXEC msdb.dbo.rds_restore_database
 @restore_db_name = 'OurDB'
, @s3_arn_to_restore_from = 'arn:aws:s3:::bucket/SQLBackUp.bak'
, @with_norecovery = 1

And we know that command won’t work if the Options Group and IAM roles aren’t set up, so we’ve made sure they are. When we run the command it takes a few minutes but appears to run fine.

We run the command to check on the status

EXEC msdb.dbo.rds_task_status

We watch as % complete grows over the next few minutes to 100% and the lifecycle value says “SUCCESS”. However, when I try to access the database I’m unable to; it says it’s stuck restoring, and the exact error message I get is “The database OurDB is not accessible. (ObjectExplorer)”.

Whenever I search RESTORE DATABASE Stuck RDS SQL SERVER or variations of that, all the stuff I find is about doing a restore of the entire RDS Instance and not an individual database. Or if I’m able to find something about restoring a database that’s stuck then it’s not about RDS but about regular SQL Server.

Here are some screenshots of what I’m seeing too.

SQL Commands and Results
Error in SSMS

sql server – BULK INSERT continue on PRIMARY KEY error

I get a CSV file with several million records every few days. I need to insert them into our database. What I do is quite straightforward:

BULK INSERT dbo.activity
FROM 'C:\tmp\activity.csv'
WITH (
    FIRSTROW = 2,
    ROWTERMINATOR = '0x0a',
    BATCHSIZE = 1000
);

This works well. However, it may be the case that I get some duplicated records. When I run the import I, as expected, get the error:

Violation of PRIMARY KEY constraint 'PK__id__790FF7A****'. Cannot insert duplicate key in object 'dbo.activity'. The duplicate key value is (1234)

This gets quite cumbersome when instead of one duplicated record I have hundreds. Is there any way to tell BULK INSERT to ignore (or log) those errors and carry on with the good records? I am thinking about something like:

  BULK INSERT dbo.activity
  FROM 'C:\tmp\activity.csv'
  WITH (options....)
        ERROR_NUMBER() AS ErrorNumber
       ,ERROR_MESSAGE() AS ErrorMessage;
   -- I know this line is wrong. It is just what I would like to do
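BULK INSERT has no option to skip constraint violations row by row, so a common workaround is to bulk-load into a keyless staging table and then copy across only rows whose key is not already present (in T-SQL, an INSERT … SELECT … WHERE NOT EXISTS as the second step). A sketch of the pattern in SQLite rather than SQL Server, with table names and toy rows made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE activity (id INTEGER PRIMARY KEY, payload TEXT);
-- The staging table has no key, so the bulk load itself can
-- never fail on duplicates.
CREATE TABLE activity_staging (id INTEGER, payload TEXT);
INSERT INTO activity VALUES (1234, 'already loaded');
""")

# Pretend these rows came from the CSV; 1234 duplicates an existing key.
rows = [(1234, 'dupe'), (5678, 'new'), (9012, 'new')]
conn.executemany("INSERT INTO activity_staging VALUES (?, ?)", rows)

# Copy across only the rows whose key is not already in the target;
# duplicates are silently left behind in staging (where they can be logged).
conn.execute("""
    INSERT INTO activity (id, payload)
    SELECT s.id, s.payload FROM activity_staging s
    WHERE NOT EXISTS (SELECT 1 FROM activity a WHERE a.id = s.id)""")

count = conn.execute("SELECT COUNT(*) FROM activity").fetchone()[0]
kept = conn.execute(
    "SELECT payload FROM activity WHERE id = 1234").fetchone()[0]
```

On the SQL Server side, another option sometimes used is creating the primary key with IGNORE_DUP_KEY = ON, which discards duplicate rows with a warning instead of failing the load; the staging approach is preferred when you also want to log which rows were skipped.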


Counting rows affected when running a SQL Server stored procedure

I have a problem counting the rows affected by the execution of a stored procedure, which runs with 6 parameters. When it executes, the execution result shows me the number of affected rows; in fact, it shows two affected-row values.

I need to capture the value that the execution returns. I’m not sure that @@ROWCOUNT counts what I need when I add it right below the EXEC of the stored procedure, because it returns 0.

Is there any way to get the affected-row count that is shown after the stored procedure executes?

Many thanks in advance.
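The usual T-SQL pattern is to have the procedure report its own count through an OUTPUT parameter set from @@ROWCOUNT immediately after each data statement, since @@ROWCOUNT is reset by the very next statement (which is why reading it after EXEC yields 0). A rough equivalent in Python/SQLite, where `Cursor.rowcount` plays the role of @@ROWCOUNT and a function stands in for the procedure (all names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO t VALUES (?, 'old')", [(1,), (2,), (3,)])

def update_status(new_status: str) -> int:
    """Stand-in for the stored procedure: it does its work and
    returns the affected-row count itself, instead of leaving the
    caller to read a counter after the fact."""
    cur = conn.execute(
        "UPDATE t SET status = ? WHERE id <= 2", (new_status,))
    # Reading cur.rowcount here, immediately after the UPDATE, is the
    # analogue of copying @@ROWCOUNT into an OUTPUT parameter right
    # after the statement inside the procedure body.
    return cur.rowcount

affected = update_status("new")
```

If the procedure shows two affected-row messages, it runs two data statements; capture @@ROWCOUNT after each one (into one or two OUTPUT parameters) inside the procedure, depending on which count you actually need.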


sql – How to use a column named after a keyword in a DB2 CONCAT function

So I have this table FOO, which has a VARCHAR column named COMMENT (which happens to be a reserved keyword).
When I try to use it in a CONCAT function in my SELECT, the result is NULL.

How can I fix this?


I also tried to put " or ' around COMMENT, but then it is interpreted as a VARCHAR literal…

Second, I tried `, but that prints the following error.


I also tried to add the SCHEMA and the TABLE name in front of the column like:


But no luck.
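Two things are likely going on here. First, in DB2 (and standard SQL generally) a reserved word used as a column name needs a double-quoted delimited identifier, "COMMENT", written in the exact case it was created in (uppercase by default in DB2); backticks are MySQL syntax, which is why they error out. Second, if any row’s COMMENT is NULL, concatenation with it yields NULL for that row, so the usual fix is COALESCE("COMMENT", ''). Both points can be illustrated in SQLite, which accepts the same standard quoting (the table contents are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "COMMENT" in double quotes is a delimited identifier, not a string.
conn.execute('CREATE TABLE FOO (ID INTEGER, "COMMENT" TEXT)')
conn.execute("INSERT INTO FOO VALUES (1, 'hello'), (2, NULL)")

# Concatenating a NULL column yields NULL for that row...
rows = conn.execute(
    """SELECT 'prefix: ' || "COMMENT" FROM FOO ORDER BY ID""").fetchall()

# ...while COALESCE keeps the result non-NULL on every row.
safe = conn.execute(
    """SELECT 'prefix: ' || COALESCE("COMMENT", '')
       FROM FOO ORDER BY ID""").fetchall()
```

In DB2 the equivalent would be `SELECT CONCAT('prefix: ', COALESCE("COMMENT", '')) FROM FOO`, optionally qualified as `FOO."COMMENT"` if schema/table prefixes are needed.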

Replication in sql server 2019?

Why is it omitted?

Because replication is not considered an HA and DR solution; it is considered a data distribution and loading feature within SQL Server, which mainly covers the requirement of distributing subsets of data to different locations, whereas HA and DR solutions focus on failover capabilities at the whole database or instance level.

Will Replication be deprecated soon?

I don’t think the whole replication feature will be deprecated, though perhaps some sub-features of replication will be. For your reference…