combinatorics – I am unable to understand the logic behind the code (I’ve added my exact questions as comments in the code)

Our local ninja Naruto is learning to make shadow clones of himself and is facing a dilemma. He
has only a limited amount of energy (e) to spare, which he must distribute entirely among all of his
clones. Moreover, each clone requires at least a certain amount of energy (m) to function. Your job is
to count the number of different ways he can create shadow clones.
Example:

e=7;m=2

ans = 4

The following possibilities occur:
Make 1 clone with 7 energy

Make 2 clones with 2, 5 energy

Make 2 clones with 3, 4 energy

Make 3 clones with 2, 2, 3 energy.

Note: <2, 5> is the same as <5, 2>.
Make sure the ways are not counted multiple times because of different ordering.

Answer

#include <stdio.h>

int count(int n, int k){
    if((n<k)||(k<1)) return 0;
    else if ((n==k)||(k==1)) return 1;
    else return count(n-1,k-1)+ count(n-k,k);   // logic behind this?
}

int main()
{
    int e,m;            // e is total energy and m is min energy per clone
    scanf("%d %d", &e, &m);
    int max_clones= e/m;
    int i,ans=0;
    for(i=1;i<=max_clones;i++){
        int available = e - ((m-1)*i);   // why is it (m-1)*i instead of m*i
        ans += count(available, i);
    }
    printf("%d\n", ans);   // print the total number of ways
    return 0;
}
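For what it's worth, here is one reading of that recurrence: count(n, k) counts the partitions of n into exactly k positive parts. Every such partition either contains a part equal to 1 (drop it, leaving a partition counted by count(n-1, k-1)) or has all parts >= 2 (subtract 1 from each of the k parts, leaving one counted by count(n-k, k)). Likewise, available = e - (m-1)*i pre-pays m-1 energy to each of the i clones, so the remainder only has to be split into i positive parts. A small Python cross-check of both ideas (function names are mine):

```python
def count(n, k):
    """Number of partitions of n into exactly k parts, each >= 1."""
    if n < k or k < 1:
        return 0
    if n == k or k == 1:
        return 1
    # Partitions containing a part equal to 1, plus partitions whose
    # parts are all >= 2 (subtract 1 from each of the k parts).
    return count(n - 1, k - 1) + count(n - k, k)

def ways(e, m):
    """Ways to split energy e among clones that each need >= m."""
    return sum(count(e - (m - 1) * i, i) for i in range(1, e // m + 1))

def brute_force(e, m):
    """Directly enumerate non-increasing part lists summing to e."""
    def go(remaining, max_part):
        if remaining == 0:
            return 1
        return sum(go(remaining - p, p)
                   for p in range(m, min(max_part, remaining) + 1))
    return go(e, e)

print(ways(7, 2), brute_force(7, 2))  # 4 4 – matches the example
```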

query – Why does the underscore character interfere with Dataset queries?

Why does the underscore character affect the result of a Dataset query? Is there a workaround to this problem?

Here is a Dataset dsTable:

{
 {<|"gid" -> "1166182964626283", "name" -> "(no section)"|>},
 {<|"gid" -> "1166182964626284", "name" -> "SETUP"|>},
 {<|"gid" -> "1166182964626285", "name" -> "STRATEGY_VIEW "|>},
 {<|"gid" -> "1166182964626286", "name" -> "OPERATIONS_VIEW"|>},
 {<|"gid" -> "1166182964626287", "name" -> "TECH_VIEW"|>},
 {<|"gid" -> "1166182964626288", "name" -> "LEGAL_VIEW"|>},
 {<|"gid" -> "1166182964626289", "name" -> "FINANCE_VIEW"|>}
}

Now, when I query for “SETUP”…

dsTable[Select[#name == "SETUP" &], {"name", "gid"}]

…I get a match, like so:

{<|"name" -> "SETUP", "gid" -> "1166182964626284"|>}

However, when I query for something with an underscore in it…

dsTable[Select[#name == "STRATEGY_VIEW" &], {"name", "gid"}]

I get no matches.

Is there any performance benefit to minifying MySQL queries?

I recently ran across some tools that help you minify queries, and I’m curious to know if that’s actually something that could help performance.

MS SQL Server 2012 – How To Find Last Executed Queries

I have 120+ databases and I want to find the last executed queries (actually, the last executed queries and the last executed SELECT queries) over the last 10 days. I have the scripts below for this job:

1-) Some database names are NULL and there is no date.

SELECT
    txt.TEXT AS [SQL Statement],
    qs.execution_count AS [No. Times Executed],
    qs.last_execution_time AS [Last Time Executed],
    DB_NAME(txt.dbid) AS [Database]
FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS txt
ORDER BY qs.last_execution_time DESC;

2-) There are no database names and no dates.

SELECT dest.TEXT AS [Query],
deqs.execution_count AS [Count],
deqs.last_execution_time AS [Time]
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
ORDER BY deqs.last_execution_time DESC;

3-)

SELECT deqs.last_execution_time AS [Time], dest.text AS [Query], dest.*
FROM sys.dm_exec_query_stats AS deqs
CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
WHERE dest.dbid = DB_ID('dbname')
ORDER BY deqs.last_execution_time DESC

4-) There are no database names.

SELECT        SQLTEXT.text, STATS.last_execution_time
FROM          sys.dm_exec_query_stats STATS
CROSS APPLY   sys.dm_exec_sql_text(STATS.sql_handle) AS SQLTEXT
WHERE         STATS.last_execution_time > GETDATE()-1
ORDER BY      STATS.last_execution_time DESC

5-) The query below runs, but it’s too slow: it executes for about 3 minutes and couldn’t even finish the part for just the last day.

SELECT 
sql_text.text, 
st.last_execution_time,
DB_NAME(qp.dbid) as databasename
FROM sys.dm_exec_query_stats st 
CROSS APPLY sys.dm_exec_sql_text(st.sql_handle) AS sql_text
INNER JOIN sys.dm_exec_cached_plans cp
ON cp.plan_handle = st.plan_handle
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) as qp
WHERE st.last_execution_time >= DATEADD(week, -1, getdate())
ORDER BY last_execution_time DESC;

I want to order my columns like this: Text – Last Execution Time – DB Name

But I’m not an SQL expert and I haven’t found a clear solution for this. How can I combine these scripts?
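A sketch of one way to combine them (my own, untested): take script 1 as the base, add the date filter from script 4 widened to 10 days, and fall back to the plan attributes when sys.dm_exec_sql_text returns a NULL dbid (which is why some rows show no database name). Columns are ordered Text – Last Execution Time – DB Name.

```sql
SELECT      txt.text AS [Text],
            qs.last_execution_time AS [Last Execution Time],
            DB_NAME(COALESCE(txt.dbid, CONVERT(int, pa.value))) AS [DB Name]
FROM        sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS txt
OUTER APPLY (SELECT value                      -- dbid recorded in the cached plan
             FROM sys.dm_exec_plan_attributes(qs.plan_handle)
             WHERE attribute = 'dbid') AS pa
WHERE       qs.last_execution_time >= DATEADD(day, -10, GETDATE())
ORDER BY    qs.last_execution_time DESC;
```

Note that sys.dm_exec_query_stats only covers plans still in cache, so anything evicted during the 10 days will be missing regardless of how the scripts are combined.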

Regards,

Is there a search engine with reverse search mechanism: processing multiple queries and retrieving a single result for each of them?

A regular search engine retrieves a number of results matching a single query. I need one that does the opposite: it accepts a number of queries and displays a single match for each of them. If there is no match, I need to know that too. An implementation of this approach exists, but its application is limited to the database embedding it (PubMed). Batch Citation Matcher enables a user to identify unique content in databases covering material also present in PubMed: a user submits a text file, the engine transforms the file's content, separated in a required way, into a number of queries, and then displays a table with the queries in one column and, in another column, whether each query's text was found in the database being compared to PubMed. Is there an analogous application that can be applied to any website?

How can I log which queries are in a distributed transaction on an MS SQL server?

I am looking at migrating a database from a self-hosted cluster to Microsoft Azure SQL. I am aware that a few distributed transactions are involved, which aren’t supported on Azure SQL.

Is there a way that I can log all distributed transactions and their queries, so that I can inspect the client application and remove the requirement for distributed transactions?

python – Using structured queries to geocode records in a pandas data frame using GeoPy

I would like to use structured queries for geocoding in GeoPy, and I would like to run them on a large number of observations. I don't know how to make these queries using a pandas data frame (or something that can easily be transformed to and from a pandas data frame).

So far I have implemented:

import pandas
from geopy.extra.rate_limiter import RateLimiter
from geopy.geocoders import Nominatim

Ngeolocator = Nominatim(user_agent="myGeocoder")
Ngeocode = RateLimiter(Ngeolocator.geocode, min_delay_seconds=1)

df = pandas.DataFrame(["Bob", "Joe", "Ed"])
df["CLEANtown"] = ['Harmony', 'Fargo', '']
df["CLEANcounty"] = ['', '', 'Traill']
df["CLEANstate"] = ['Minnesota', 'North Dakota', 'North Dakota']
df["full"] = ['Harmony, Minnesota', 'Fargo, North Dakota', 'Traill County, North Dakota']
df.columns = ["name"] + list(df.columns[1:])

I know how to run a structured query in one place by providing a dictionary. That is to say:

q={'city':'Harmony', 'county':'', 'state':'Minnesota'}
testN=Ngeocode(q,addressdetails=True)

And I know how to geocode from the data frame using just a single column filled with strings. That is to say:

df['easycode'] = df['full'].apply(lambda x: Ngeocode(x, language='en', addressdetails=True).raw)

But how do I transform the CLEANtown, CLEANcounty, and CLEANstate columns into row-by-row dictionaries, use those dictionaries as structured queries, and put the results back into the pandas data frame?
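One way to sketch this (column names taken from above; the actual geocoding call is left commented out, since it needs network access and the Ngeocode rate limiter defined earlier):

```python
import pandas as pd

df = pd.DataFrame({
    "name": ["Bob", "Joe", "Ed"],
    "CLEANtown": ["Harmony", "Fargo", ""],
    "CLEANcounty": ["", "", "Traill"],
    "CLEANstate": ["Minnesota", "North Dakota", "North Dakota"],
})

def row_to_query(row):
    # Build one structured-query dict per row, dropping empty components
    # so Nominatim is not sent blank fields.
    parts = {"city": row["CLEANtown"],
             "county": row["CLEANcounty"],
             "state": row["CLEANstate"]}
    return {k: v for k, v in parts.items() if v}

df["query"] = df.apply(row_to_query, axis=1)

# Then geocode each dict exactly as in the single-query case:
# df["located"] = df["query"].apply(
#     lambda q: Ngeocode(q, addressdetails=True).raw)
```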

Thank you!

mysql – UNION ALL vs CASE – Optimization of SQL queries

I need to get counts of both the approved and the unapproved rows from my table.
I tried both of the queries below and they work. Which query is better optimized, and why? Please give a suggestion.

First query

SELECT 
    COUNT(CASE WHEN approved=1 THEN id END) as approved,
    COUNT(CASE WHEN approved=0 and deleted_at is null THEN id END) as unapproved
    FROM my_table

Second query

 SELECT COUNT(id)
 FROM my_table
 WHERE approved=1 
 UNION ALL
 SELECT COUNT(id)
 FROM my_table
 WHERE approved=0 and deleted_at is null

I executed EXPLAIN EXTENDED and attached the screenshot.

With the SQL Server Query Store, how can I keep cleanup from deleting the "cheapest" queries?

I'm only interested in resource consumption (QUERY_CAPTURE_MODE = AUTO), with a fairly short interval (for example, INTERVAL_LENGTH_MINUTES = 5). I will retrieve data from the Query Store on a fairly regular basis, e.g. daily, collecting statistics from all completed intervals not retrieved before. My concern is that the docs say "Size-based cleanup first removes the cheapest and oldest queries." I definitely want to delete the statistics for old queries (older than my retrieval interval), but if a ton of low-resource queries have run in the intervals that I haven't retrieved yet, I don't want those to be removed, since they could add up to a high total consumption.

Is there a way to force-delete the statistics for old queries, or something else I can do to ensure that cleanup doesn't delete statistics for intervals I haven't yet retrieved?
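One possible approach (a sketch, not tested): instead of relying on size-based cleanup, periodically remove old entries yourself with sp_query_store_remove_query, driven by last_execution_time in sys.query_store_query. If old entries are pruned on your own schedule, size-based cleanup should rarely be under enough pressure to reach the newer, not-yet-retrieved intervals:

```sql
-- Sketch: explicitly remove Query Store entries older than the 10-day
-- retrieval window, so size-based cleanup has no reason to touch newer ones.
DECLARE @query_id int;
DECLARE old_queries CURSOR LOCAL FAST_FORWARD FOR
    SELECT q.query_id
    FROM sys.query_store_query AS q
    WHERE q.last_execution_time < DATEADD(day, -10, SYSUTCDATETIME());

OPEN old_queries;
FETCH NEXT FROM old_queries INTO @query_id;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_query_store_remove_query @query_id;  -- drops the query, its plans, and its stats
    FETCH NEXT FROM old_queries INTO @query_id;
END;
CLOSE old_queries;
DEALLOCATE old_queries;
```

Note that sp_query_store_remove_query removes the query together with all of its runtime statistics, so this should only run after the statistics for those intervals have been retrieved.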

sql – I'm trying to understand the difference between the following queries

I have a salesman table and a customer table, and I'm trying to find all the salesmen who live in one of the cities in which a customer lives (note that each customer is associated with a specific salesman, but for now we are only comparing the city).

TABLE CREATION (ORACLE)

CREATE TABLE SALESMAN (  
    SALESMAN_ID INT CONSTRAINT SALESMAN_PK PRIMARY KEY,  
    NAME VARCHAR2(15),  
    CITY VARCHAR2(10),  
    COMMISSION DECIMAL(4,2))
;

INSERT ALL  
    INTO SALESMAN VALUES(5001,'JAMES HOOG','NEW YORK',0.15)  
    INTO SALESMAN VALUES(5002,'NAIL KNITE','PARIS',0.13)  
    INTO SALESMAN VALUES(5005,'PIT ALEX','LONDON',0.11)  
    INTO SALESMAN VALUES(5006,'MC LYON','PARIS',0.14)  
    INTO SALESMAN VALUES(5003,'LAUSON HEN','SAN JOSE',0.12)  
    INTO SALESMAN VALUES(5007,'PAUL ADAM','ROME',0.13)  
SELECT * FROM DUAL
;

CREATE TABLE CUSTOMER (   
    CUSTOMER_ID INT CONSTRAINT CUSTOMER_PK PRIMARY KEY,   
    CUST_NAME VARCHAR2(15),   
    CITY VARCHAR(10),   
    GRADE INT,   
    SALESMAN_ID INT,  
    CONSTRAINT FK_CUSTOMER_SALESMAN  
    FOREIGN KEY (SALESMAN_ID) REFERENCES SALESMAN (SALESMAN_ID)) 
;

INSERT ALL   
    INTO CUSTOMER VALUES (3002, 'NICK RIMANDO', 'NEW YORK', 100, 5001)   
    INTO CUSTOMER VALUES (3007, 'BRAD DAVIS', 'NEW YORK', 200, 5001)   
    INTO CUSTOMER VALUES (3005, 'GRAHAM ZUSI', 'CALIFORNIA', 200,5002)   
    INTO CUSTOMER VALUES (3008, 'JULIAN GREEN', 'LONDON', 300,5002)   
    INTO CUSTOMER VALUES (3004, 'FABIAN JOHSON', 'PARIS',300,5006)   
    INTO CUSTOMER VALUES (3009, 'GEOFF CAMEROON', 'BERLIN', 100,5003)   
    INTO CUSTOMER VALUES (3003, 'JOZY ALTIDOR', 'MOSCOW', 200,5007)   
    INTO CUSTOMER VALUES (3001, 'BRAD GUZAN', 'LONDON',NULL,5005)   
SELECT * FROM DUAL
;

SELECT * FROM SALESMAN;

SALESMAN_ID  NAME        CITY      COMMISSION
5001         JAMES HOOG  NEW YORK  .15
5002         NAIL KNITE  PARIS     .13
5005         PIT ALEX    LONDON    .11
5006         MC LYON     PARIS     .14
5003         LAUSON HEN  SAN JOSE  .12
5007         PAUL ADAM   ROME      .13

SELECT * FROM CUSTOMER;

CUSTOMER_ID  CUST_NAME       CITY        GRADE  SALESMAN_ID
3002         NICK RIMANDO    NEW YORK    100    5001
3007         BRAD DAVIS      NEW YORK    200    5001
3005         GRAHAM ZUSI     CALIFORNIA  200    5002
3008         JULIAN GREEN    LONDON      300    5002
3004         FABIAN JOHSON   PARIS       300    5006
3009         GEOFF CAMEROON  BERLIN      100    5003
3003         JOZY ALTIDOR    MOSCOW      200    5007
3001         BRAD GUZAN      LONDON      -      5005

# EXPECTED OUTPUT
SALESMAN_ID  NAME        CITY      COMMISSION
5001         JAMES HOOG  NEW YORK  .15
5006         MC LYON     PARIS     .14
5005         PIT ALEX    LONDON    .11
5002         NAIL KNITE  PARIS     .13

# QUERY 1

SELECT DISTINCT
SLS.*
FROM SALESMAN SLS, CUSTOMER CUST
WHERE SLS.CITY = CUST.CITY

# QUERY 2
SELECT * FROM SALESMAN SLS  
WHERE EXISTS (SELECT SALESMAN_ID FROM CUSTOMER WHERE SLS.CITY = CUSTOMER.CITY)

# QUERY 3
SELECT * FROM SALESMAN SLS  
WHERE SLS.SALESMAN_ID IN  (SELECT DISTINCT CUST.SALESMAN_ID FROM CUSTOMER CUST WHERE SLS.CITY = CUST.CITY)
;

# OUTPUT FROM QUERY 3
SALESMAN_ID  NAME        CITY      COMMISSION
5001         JAMES HOOG  NEW YORK  .15
5006         MC LYON     PARIS     .14
5005         PIT ALEX    LONDON    .11

Of the three queries above, queries 1 and 2 give the expected output, but query 3 does not. All three queries have the same join condition between the salesman's city and the customer table, but I don't understand why query 3 gives a different output.
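The difference can be reproduced outside SQL. A small Python simulation of the sample rows (my own sketch) shows that queries 1 and 2 keep a salesman whenever any customer lives in his city, while query 3's correlated IN additionally requires that such a customer be assigned to that very salesman, which is what drops NAIL KNITE (5002): his only same-city customer, FABIAN JOHSON in PARIS, belongs to 5006.

```python
salesmen = [
    (5001, 'JAMES HOOG', 'NEW YORK'),
    (5002, 'NAIL KNITE', 'PARIS'),
    (5005, 'PIT ALEX', 'LONDON'),
    (5006, 'MC LYON', 'PARIS'),
    (5003, 'LAUSON HEN', 'SAN JOSE'),
    (5007, 'PAUL ADAM', 'ROME'),
]
customers = [  # (CUSTOMER_ID, CITY, SALESMAN_ID)
    (3002, 'NEW YORK', 5001), (3007, 'NEW YORK', 5001),
    (3005, 'CALIFORNIA', 5002), (3008, 'LONDON', 5002),
    (3004, 'PARIS', 5006), (3009, 'BERLIN', 5003),
    (3003, 'MOSCOW', 5007), (3001, 'LONDON', 5005),
]

# Queries 1 and 2: keep a salesman if ANY customer lives in his city.
q12 = {sid for sid, name, city in salesmen
       if any(c_city == city for _, c_city, _ in customers)}

# Query 3: the correlated subquery returns the SALESMAN_IDs of the
# customers in his city, and his own id must be among them - i.e. one
# of HIS OWN customers must live in his city.
q3 = {sid for sid, name, city in salesmen
      if sid in {c_sid for _, c_city, c_sid in customers if c_city == city}}

print(sorted(q12))  # [5001, 5002, 5005, 5006]
print(sorted(q3))   # [5001, 5005, 5006] – 5002 is gone
```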