maximum cardinality weighted matching

I am looking for a reference on maximum cardinality weighted matching and the best known running time for it. Whenever I search, I only find maximum weight matching, which maximizes the total weight but may not have maximum cardinality. I would appreciate any reference for the maximum cardinality weighted matching problem.
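While I can't point to the reference you're after, one standard reduction is worth knowing: shift every edge weight by a constant C larger than (number of edges + 1) times the largest absolute weight. Any maximum-weight matching under the shifted weights then has maximum cardinality, and among those, maximum original weight. A brute-force sketch on a toy graph (names and graph are made up for illustration):

```python
from itertools import combinations

def matchings(edges):
    """Yield every matching: edge subsets with pairwise-disjoint endpoints."""
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            ends = [x for (u, v, w) in subset for x in (u, v)]
            if len(ends) == len(set(ends)):
                yield subset

# Bipartite toy graph on parts {1, 2, 3} and {4, 5, 6}; edge (1, 5) is
# heavy, but choosing it caps the matching size at 2 instead of 3.
edges = [(1, 4, 1.0), (1, 5, 10.0), (2, 5, 1.0), (3, 6, 1.0)]

# Shift every weight by C; maximizing the shifted total weight then
# maximizes cardinality first and original weight second.
C = (len(edges) + 1) * max(abs(w) for (u, v, w) in edges)
best = max(matchings(edges), key=lambda m: sum(w + C for (u, v, w) in m))

print(len(best), sum(w for (u, v, w) in best))  # 3 3.0
```

The brute force is only to check the reduction; in practice you would feed the shifted weights to any polynomial-time maximum weight matching algorithm.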

SQL Server Cardinality Estimation Warning

Sometimes the warning is nothing more than that, just a warning, and not something actually affecting performance.

The two things most likely affecting your performance are the Table Variable and the fact that you're looping instead of using a more relational solution. So I'd first run a SELECT * INTO #images FROM @images to copy it into a Temp Table before your WHILE loop, and then use that Temp Table inside the loop instead, which may improve performance.

To answer your question though, I believe the Implicit Conversion that induces the Cardinality Estimate warning comes from the fact that your imageid is an INT but you're using it in a string function like CONCAT(). If you stored a copy of it in your @images table variable already cast to the same string data type as the extension field, and used that copy in the CONCAT() call instead, the warning should go away.

Also, Table Variables typically result in poor Cardinality Estimates themselves because of their lack of statistics, which may be why the “Estimated Number of Rows Per Execution” is showing 1. (Note there have been improvements in SQL Server 2019.)

analytic number theory – Do we have any result proving a strong upper bound on the cardinality of the set $P_\alpha(x)$ for some large parameter $x$?

Define $x_0=0$ and $x_{i+1} = P(x_i)$ for all integers $i \ge 0$.
Let $l(p)$ be the least positive integer such that $p \mid x_{l(p)}$, for a prime $p$.

Then if we let
$$P_\alpha = \{\, p \in \mathbb{P} \mid l(p) < p^{\alpha} \,\}, \qquad 0 < \alpha < 1,$$ where $\mathbb{P}$ is the set of primes, do we have any result proving a strong upper bound on the cardinality of the set $P_\alpha(x)$ for some large parameter $x$?

MySQL multiple index columns have a full cardinality?

I have noticed indexes like:

| Table             | Non_unique | Key_name              | Seq_in_index | Column_name    | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
| archive_city_item |          1 | ARCHIVE_CITY_ITEM_IDX |            1 | city_id        | A         |        7851 |     NULL | NULL   | YES  | BTREE      |         |               |
| archive_city_item |          1 | ARCHIVE_CITY_ITEM_IDX |            2 | item_id        | A         |      266502 |     NULL | NULL   | YES  | BTREE      |         |               |
| archive_city_item |          1 | ARCHIVE_CITY_ITEM_IDX |            3 | vote_date      | A         |     4530535 |     NULL | NULL   | YES  | BTREE      |         |               |
| archive_city_item |          1 | ARCHIVE_CITY_ITEM_IDX |            4 | ip_address_str | A         |     4530535 |     NULL | NULL   | YES  | BTREE      |         |               |
| archive_city_item |          1 | ARCHIVE_CITY_ITEM_IDX |            5 | month          | A         |     4530535 |     NULL | NULL   | YES  | BTREE      |         |               |
| archive_city_item |          1 | ARCHIVE_CITY_ITEM_IDX |            6 | year           | A         |     4530535 |     NULL | NULL   | YES  | BTREE      |         |               |

I guess that since columns 3–6 have full cardinality, this index is not well constructed; it should be dropped and new indexes built (the queries should be analyzed to see whether the right index is (city_id, item_id), or maybe (city_id, item_id, vote_date), or some other combination starting with (city_id, item_id, …))?

Correct? Or am I getting this wrong?

Cardinality of sub fields

In code, can I set the cardinality of a subfield of a custom field? So the main parent field could be a single instance while some of its subfields could have multiple instances, with an ‘Add more’ button.
-Drupal 8/9
-Not a paragraph question

graphs – Maximum cardinality matching of maximum weight

Given a graph, I want to find the matching with the maximum size in terms of edges, but among those matchings, given a real weight function on the edges, the one with the maximum weight. Is there an algorithm that can find such a matching in polynomial time?


To make things more concrete, here is an example bipartite graph where the edge 1–5 is poisoned: if you select it, you won’t get a matching of the largest possible size (which is 3). But if we give this poisoned edge a large weight, the maximum weight matching will be forced to pick it, making the matching smaller than it could be.


Side note: these questions are readily extended to the min edge cover, min weighted edge cover and “is there a polynomial time algorithm for min edge cover with minimal weight”.
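If an off-the-shelf routine is acceptable, NetworkX exposes exactly this variant through the maxcardinality flag of max_weight_matching (assuming NetworkX is available; the graph below is the poisoned-edge example from the question):

```python
import networkx as nx

# Bipartite example: edge 1-5 is "poisoned" -- heavy, but choosing it
# caps the matching size at 2 instead of 3.
G = nx.Graph()
G.add_weighted_edges_from([(1, 4, 1.0), (1, 5, 10.0), (2, 5, 1.0), (3, 6, 1.0)])

# maxcardinality=True restricts the search to maximum-cardinality
# matchings and returns one of maximum weight among them.
M = nx.max_weight_matching(G, maxcardinality=True)
print(sorted(tuple(sorted(e)) for e in M))  # three edges; 1-5 is avoided
```

Without maxcardinality=True, the same call would happily return the size-2 matching containing the heavy edge 1–5.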


Proving cardinality given surjection from A to B and B to A

Suppose f:A→B and g:B→A are both surjective; does this imply that there is a bijection between A and B?

I was told that the statement above is true only with the Axiom of Choice. Can someone provide an explanation of why that is the case?
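Here is one way AC enters, sketched (this shows where choice is used, not that it is unavoidable):

```latex
\text{By AC, every surjection has a right inverse: choose } s\colon B\to A
\text{ with } f(s(b)) = b \text{ for all } b\in B.\\
\text{Then } s(b)=s(b') \implies b = f(s(b)) = f(s(b')) = b',
\text{ so } s \text{ is injective.}\\
\text{Similarly } g \text{ yields an injection } t\colon A\to B.\\
\text{Cantor–Schröder–Bernstein (which needs no choice) applied to }
s\colon B\hookrightarrow A \text{ and } t\colon A\hookrightarrow B
\text{ gives a bijection } A\cong B.
```

Choice is used exactly once per surjection, to pick one preimage for each element of the codomain; the independence of the full statement from ZF is a separate (model-theoretic) fact.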

automata – Cardinality of formal languages

The thing you need to remember is that even if Σ has only one symbol, Σ* still contains infinitely many strings over that symbol. Even if Σ = {a}, we have Σ* = {ε, a, aa, aaa, …} = {a^n : n ≥ 0}, so Σ* is infinite just as with any other alphabet. Once you are convinced of that, you can form infinite languages over that alphabet: P(Σ*) is infinite whenever Σ* is infinite, and we have shown that Σ* is infinite even for |Σ| = 1.
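The enumeration in the answer above can be sketched directly (a toy illustration; the generator name is made up):

```python
from itertools import islice

def kleene_star(sigma):
    """Generate Σ* in length order: ε, then length-1 strings, length-2, ..."""
    frontier = [""]
    while frontier:
        nxt = []
        for s in frontier:
            yield s  # emit every string of the current length
            nxt.extend(s + a for a in sigma)  # extend by one symbol
        frontier = nxt

# Even over the one-symbol alphabet {a}, Σ* never runs out:
first = list(islice(kleene_star("a"), 5))
print(first)  # ['', 'a', 'aa', 'aaa', 'aaaa']
```

The generator never terminates for a non-empty alphabet, which is exactly the claim that Σ* is infinite even when |Σ| = 1.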

logic – Is cardinality a scam?

To define cardinality we need the axiom of choice (AC). The way that AC / well-ordering works is that any ordering of infinite sets can extend any ordering of finite sets. The unwritten assumption of cardinality is that cardinality is more important than some arbitrary ordering.

I find it helpful to be clear about this underlying assumption. One way to formalize this assumption into an axiom is to state that Peano arithmetic (PA) is true if and only if transfinite arithmetic (à la cardinality) is true. Binding these two sets of rules gives us a justification for relying on cardinality.

This axiom, in addition to ZFC, makes the continuum hypothesis (CH) a theorem. Even now that we know that CH is logically independent of ZFC, many mathematicians still write, talk, and arguably think of CH as a hypothesis. We can forgive Cantor for overlooking this assumption, but we have less reason to be ignorant.

While new ideas should prove themselves (especially for abductive and inductive reasoning), the most important principle for mathematics is not to assume more than is needed.

What am I missing?
At least, suggest a better axiom (or is that a reason for the oversight?).

fyi $exists$CH = PNP

database design – Determining Cardinality in a Logical Model

How should one determine Cardinality in a Logical Model?

Should it be based on how the rows of one entity relate to another entity, or should we consider the natural relationship between the entities, i.e. the conceptual relationship between them?

Example: If I have an entity Course and an entity Course Type, what would be the cardinality? Each course can have only one course type. For example, Bachelor of Arts is a course of course type Bachelors, and Master of Science is of course type Masters.

If I have Course Type as part of the Course entity, then Course Type would only contain the list of valid course types, and the relationship would be “many-to-one” (non-identifying), as many courses could share one course type.

On the other hand, if I model it in such a way that the Course Type entity has Course ID (foreign key) and Course Type, then the relationship between Course and Course Type is “one-to-one” (identifying).

Basically, what I am trying to understand is which of the following is right:

each course has one course type OR many courses have one course type

How should one make this decision ? Are there any guidelines ?

P.S.: I am a beginner and am using Oracle Data Modeler.