How to determine cardinality for this case?

Apartment Rental System

For the relationship Tenant makes Payment, my thinking is that multiple tenants can each make multiple payments (monthly rent), and many payments are then made to the owner. I am not sure whether this is a good database design. What can I do to improve the diagram?
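For concreteness, here is one possible relational reading of those cardinalities, sketched in generic SQL with assumed table and column names (not taken from your diagram): each tenant makes many payments, and each payment is received by exactly one owner.

```sql
-- Assumed names and types; adjust to your actual model.
CREATE TABLE Tenant  (tenant_id INT PRIMARY KEY, full_name VARCHAR(100));
CREATE TABLE Owner   (owner_id  INT PRIMARY KEY, full_name VARCHAR(100));

CREATE TABLE Payment (
    payment_id INT PRIMARY KEY,
    tenant_id  INT NOT NULL,            -- Tenant 1..* Payment
    owner_id   INT NOT NULL,            -- Owner  1..* Payment
    amount     DECIMAL(10, 2) NOT NULL,
    paid_on    DATE NOT NULL,
    FOREIGN KEY (tenant_id) REFERENCES Tenant (tenant_id),
    FOREIGN KEY (owner_id)  REFERENCES Owner (owner_id)
);
```

Under this reading, Tenant–Payment and Owner–Payment are both one-to-many, and the many-to-many flavour of "tenants make payments to owners" only appears indirectly through the Payment table.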

solution verification – Proof that the cardinality of the ciphertext set is greater than or equal to the cardinality of the plaintext set for a symmetric cryptosystem

I have been working on this problem and would appreciate some help.

Consider a symmetric cryptosystem with non-empty finite sets $P, C, K$ of plaintexts, ciphertexts, and keys, respectively, and with encryption functions $e_k : P \rightarrow C$ and decryption functions $d_k : C \rightarrow P$ indexed by keys $k \in K$.

Show that $|C| \geq |P|$.


Here is my attempt:

Since $d_k$ is the left inverse of $e_k$

Suppose $e_k(x) = e_k(y)$; then $x = d_k(e_k(x)) = d_k(e_k(y)) = y$, meaning $e_k$ is injective for all $k \in K$.

Also, $e_k(p) = c \in C$ for all $p \in P$.

Suppose, for finite sets $P$ and $C$, $|P| > |C|$.

Then, by the pigeonhole principle, there exist $a, b \in P$ with $a \neq b$ and $e_k(a) = e_k(b)$, which raises a contradiction (as it would mean $e_k$ isn't injective).

Thus $|P| \not> |C|$, i.e. $|P| \leq |C|$, or equivalently $|C| \geq |P|$.
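For comparison, once injectivity is established the contradiction can be skipped entirely: for any fixed $k \in K$, injectivity of $e_k$ gives
$$|P| = |e_k(P)| \leq |C|,$$
since $e_k(P) \subseteq C$ and distinct plaintexts map to distinct ciphertexts.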

Is this the correct answer for the cardinality of this set?

This is a question from a practice quiz at my university.


Is the question asking for the cardinality of Σ1 = {a,b} to the power of four?

If that’s the case, then the set would still have a cardinality of 2, since elements in a set are unique. It wouldn’t be {a, b, a, b, a, b, a, b}, right?
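For what it’s worth, here is a quick Python check of the two possible readings (assuming Σ1 to the power of four is meant as the 4-fold Cartesian product of Σ1 = {a, b} with itself; the script is only illustrative):

```python
from itertools import product

sigma1 = {'a', 'b'}

# Reading 1: the 4-fold Cartesian product, i.e. all length-4 tuples over {a, b}.
four_fold = set(product(sigma1, repeat=4))
print(len(four_fold))  # 16 distinct tuples, such as ('a', 'b', 'b', 'a')

# Reading 2: the alphabet itself, where repeated listings of a and b collapse.
print(len(sigma1))     # 2
```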

Efficient cardinality of set overlap relation

Assume that we have a set S of sets s.

Every pair (s,s') in SxS can be overlapping or not.

How can we efficiently compute the number of pairs (s,s') that are overlapping, i.e. that share at least one element?

Additional: actually, each set s occurs a number of times (S is a multiset). I have tried to create a scipy.sparse.csr_matrix M that stores the subset partial order over S. Then I tried to add the additional edges for overlaps through M.T.dot(M), in order to later compute f.dot(M).dot(f.T), where f gives the frequency of each s. Unfortunately, M.T.dot(M) becomes too large, so if anything, we probably have to propagate the counts through the subset partial order one after the other.

Hints: it might seem natural at first to just take, for each element occurring in any s, its frequency squared, and sum these up to count the pairs sharing that particular element. However, the problem is that this counts many pairs multiple times: for example, {a,b} is counted for a and also for b. This is why it seems important to use the subset partial order.

Any ideas?
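For reference, here is a minimal brute-force baseline in Python that pins down what is being counted (assuming S is given as a list of distinct sets with their multiplicities, and that unordered pairs of distinct occurrences are counted; the names are illustrative). It is quadratic in the number of distinct sets, so it is only a correctness check, not the efficient solution being asked for.

```python
from itertools import combinations

def overlapping_pairs(sets_with_freq):
    """Count unordered pairs of occurrences (s, s') that share at least one element.

    sets_with_freq: list of (frozenset, multiplicity) pairs.
    """
    total = 0
    # Two occurrences of the same distinct set always overlap (if the set is non-empty).
    for s, f in sets_with_freq:
        if s:
            total += f * (f - 1) // 2
    # Occurrences of two different distinct sets overlap iff the sets intersect.
    for (s, f), (t, g) in combinations(sets_with_freq, 2):
        if s & t:
            total += f * g
    return total

S = [(frozenset('ab'), 2), (frozenset('bc'), 1), (frozenset('d'), 1)]
print(overlapping_pairs(S))  # 3: the two copies of {a,b} with each other, and each with {b,c}
```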

Matching with specific cardinality

In a weighted graph $G(\mathcal{V},\mathcal{E})$, where $w(i,j)$ is the weight of the edge $(i,j) \in \mathcal{E}$, how can I find a maximum-weight matching with a specific size (i.e. a specific cardinality)?
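As a sanity check only (not an efficient algorithm), here is a brute-force Python sketch that enumerates all edge subsets of the requested size and keeps the heaviest one that forms a matching; the edge-list representation and names are assumptions, not part of the question. For larger graphs one would instead formulate the problem as an integer program with a constraint fixing the number of selected edges.

```python
from itertools import combinations

def max_weight_matching_of_size(edges, k):
    """Return (weight, matching) for the heaviest matching with exactly k edges,
    or None if no matching of that size exists.

    edges: list of (u, v, weight) triples.  Exponential in |E|; small graphs only.
    """
    best = None
    for subset in combinations(edges, k):
        endpoints = [x for (u, v, _) in subset for x in (u, v)]
        if len(endpoints) == len(set(endpoints)):  # no vertex used twice => a matching
            weight = sum(w for (_, _, w) in subset)
            if best is None or weight > best[0]:
                best = (weight, subset)
    return best

edges = [(1, 2, 5.0), (2, 3, 4.0), (3, 4, 6.0), (1, 4, 1.0)]
print(max_weight_matching_of_size(edges, 2))  # (11.0, ((1, 2, 5.0), (3, 4, 6.0)))
```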

maximum cardinality weighted matching

I am looking for a reference on maximum-cardinality weighted matching and the best running-time algorithm for it. I have searched, but all I find is maximum weighted matching, which means the matching has maximum weight but may not have maximum cardinality. I would appreciate it if you could recommend a reference for maximum-cardinality weighted matching.
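Not a reference, but as a concrete illustration of the distinction: the Python library NetworkX has a flag that restricts the search to maximum-cardinality matchings and maximizes weight only among those (the toy graph below is made up).

```python
import networkx as nx

# Weights chosen so that the heaviest matching overall is NOT of maximum cardinality.
G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 10.0), (2, 3, 1.0), (1, 4, 1.0)])

# Maximum-weight matching, any cardinality: picks just the heavy edge (1, 2).
print(nx.max_weight_matching(G))
# Maximum-weight matching among maximum-cardinality matchings: picks (2, 3) and (1, 4).
print(nx.max_weight_matching(G, maxcardinality=True))
```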

SQL Server Cardinality Estimation Warning

Sometimes the warning is nothing more than that, just a warning, and not something actually affecting performance.

The two things that are most likely affecting your performance are the Table Variable and the fact that you’re looping instead of using a more relational solution. So I’d first run a SELECT * INTO #images FROM @images to put it into a Temp Table before your WHILE loop, and then use that Temp Table inside the loop instead, to potentially improve performance.

To answer your question though, I believe the fact that your imageid is an INT, but you’re using it in a string function like CONCAT(), is where the Implicit Conversion is coming from that is inducing that Cardinality Estimate warning. If you stored a copy of it in your @images table variable already cast as a string data type of the same type as the extension field, and used that in the CONCAT() function instead, then the warning should go away.
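A rough T-SQL sketch of both suggestions together, using made-up column names and types for @images (adjust to your actual definition):

```sql
-- Copy the table variable into a temp table once, pre-casting imageid to a string
-- type matching the extension column (varchar(50) here is only an assumption).
SELECT  i.imageid,
        CAST(i.imageid AS varchar(50)) AS imageid_str,
        i.extension
INTO    #images
FROM    @images AS i;

-- Inside the loop (or, better, a single set-based query), build the file name
-- from the pre-cast copy so CONCAT() no longer has to convert an INT.
SELECT  CONCAT(imageid_str, '.', extension) AS filename
FROM    #images;
```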

Also, Table Variables typically result in poor Cardinality Estimates themselves because of their lack of statistics, which may be why the “Estimated Number of Rows Per Execution” is showing 1. (Note that there have been improvements in SQL Server 2019.)

analytic number theory – Do we have any result proving a strong upper bound on the cardinality of the set $P_{\alpha}(x)$ for some large parameter $x$?

Define $x_0 = 0$ and $x_{i+1} = P(x_i)$ for all integers $i \ge 0$.
For a prime $p$, let $l(p)$ be the least positive integer such that $p \mid x_{l(p)}$.

Then, if we let
$$P_{\alpha} = \{\, p \in \mathbb{P} \mid l(p) < p^{\alpha} \,\}, \qquad 0 < \alpha < 1,$$
where $\mathbb{P}$ is the set of primes, do we have any result proving a strong upper bound on the cardinality of the set $P_{\alpha}(x)$ for some large parameter $x$?

MySQL multiple index columns have a full cardinality?

I have noticed indexes like:

| Table             | Non_unique | Key_name              | Seq_in_index | Column_name    | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
| archive_city_item |          1 | ARCHIVE_CITY_ITEM_IDX |            1 | city_id        | A         |        7851 |     NULL | NULL   | YES  | BTREE      |         |               |
| archive_city_item |          1 | ARCHIVE_CITY_ITEM_IDX |            2 | item_id        | A         |      266502 |     NULL | NULL   | YES  | BTREE      |         |               |
| archive_city_item |          1 | ARCHIVE_CITY_ITEM_IDX |            3 | vote_date      | A         |     4530535 |     NULL | NULL   | YES  | BTREE      |         |               |
| archive_city_item |          1 | ARCHIVE_CITY_ITEM_IDX |            4 | ip_address_str | A         |     4530535 |     NULL | NULL   | YES  | BTREE      |         |               |
| archive_city_item |          1 | ARCHIVE_CITY_ITEM_IDX |            5 | month          | A         |     4530535 |     NULL | NULL   | YES  | BTREE      |         |               |
| archive_city_item |          1 | ARCHIVE_CITY_ITEM_IDX |            6 | year           | A         |     4530535 |     NULL | NULL   | YES  | BTREE      |         |               |

I guess that, since columns 3–6 have full cardinality, this index is not well constructed, that it should be dropped, and that new indexes should be built (queries should be analyzed to see whether it should be (city_id, item_id), or maybe (city_id, item_id, vote_date), or some other combination starting with (city_id, item_id…))?

Correct? Or am I getting this wrong?
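If query analysis does point to (city_id, item_id) as the useful prefix, the change could look something like the sketch below; the right column list depends entirely on the actual queries, so treat this as illustrative only.

```sql
-- Replace the wide index with a narrower prefix (verify with EXPLAIN before and after).
ALTER TABLE archive_city_item
    DROP INDEX ARCHIVE_CITY_ITEM_IDX,
    ADD INDEX ARCHIVE_CITY_ITEM_IDX (city_id, item_id);
```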

Cardinality of subfields

In code, can I set the cardinality of a subfield of a custom field? That is, the main parent field would be a single instance, while some of its subfields could have multiple instances, with an ‘Add more’ button.
- Drupal 8/9
- Not a Paragraphs question