## make a single figure composed of several plots

I have to make a single figure composed of several plots, but I can’t do it.
Can anyone help me?
Thank you all

## angular – Mat-select input composed object

I am struggling to feed a composed value into a select. Let’s say our object only contains an ID and a name; the usual way to get a working select would be:

```html
<mat-form-field>
  <mat-label>Placeholder</mat-label>
  <mat-select>
    <mat-option *ngFor="let i of fooCollection" [value]="i.id">
      {{i.name}}
    </mat-option>
  </mat-select>
</mat-form-field>
```

Now, to feed a value back in, I found a working example in the documentation which simply adds a `[(value)]` binding on the `mat-select` tag, but since we have a composed object here it doesn’t work anymore.

Any idea on how to solve this? Many thanks!
Kev’

## DFA and Regular expression for strings composed of {0,1} with 2 consecutive 0’s or 1’s?

How do you build a DFA and a regular expression for strings over {0,1} that contain 2 consecutive 0’s or 1’s?
How do you account for the whole string, including the non-consecutive characters?

Currently I have (aa)|(bb)
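Your `(aa)|(bb)` matches only the bare pair itself. The usual trick for “contains 00 or 11 somewhere” is to let the expression absorb the rest of the string with `[01]*` on both sides of the core pattern. A quick sketch in Python (mapping a/b to 0/1, per the {0,1} alphabet in the title):

```python
import re

# [01]* on each side absorbs the non-consecutive characters;
# (00|11) requires the two consecutive symbols somewhere inside.
pattern = re.compile(r"^[01]*(00|11)[01]*$")

assert pattern.match("100")       # contains "00"
assert pattern.match("0110")      # contains "11"
assert not pattern.match("0101")  # strictly alternating, rejected
```

The corresponding DFA just tracks the last symbol seen and jumps to an absorbing accepting state once it sees the same symbol twice in a row.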

## exchange rate – How to fetch the price of a token with a composed id name from the coingecko API?

When trying to fetch a token with a composed id name like `nervos-network` or `crypto-com-chain`,
it’s not clear how to extract the price, since the ‘-’ gets in the way of simple dot access on the result.

Code example:

```javascript
let cryptosUrl = 'https://api.coingecko.com/api/v3/simple/price?ids=bitcoin%2Cnervos-network&vs_currencies=usd'

let cryptos;

function getCryptos() {
  fetch(cryptosUrl)
    .then(res => res.json()) // Parse the response body as JSON
    .then((res) => {
      cryptos = res;
      // 'nervos-network' is not a valid identifier, so dot notation
      // parses as a subtraction; use bracket notation instead.
      console.log('2.response', cryptos['nervos-network'])
    })
    .catch(err => console.log('>> There was an error <<', err)); // In case of error
}
```

## functional programming – Doesn’t “Always test through the public interface” contradict testing of individual composed functions?

It’s not the size of the function. It’s how it’s used.

Let’s take some well tested functions, `+ - *` and `Math.sqrt()`, and compose them into a distance function:

```javascript
function getDistance(xA, yA, xB, yB) {
  var xDiff = xA - xB;
  var yDiff = yA - yB;

  return Math.sqrt(xDiff * xDiff + yDiff * yDiff);
}
```

kirupa – Using the Pythagorean Theorem to Measure Distance

All these little functions have been tested. This code follows the well-proven Pythagorean theorem. So we’re good, right?

Well, no. Because we happen to know that the inputs 59.3293371,13.4877472 to 59.3225525,13.4619422 are supposed to give us 1.6.

The problem wasn’t with the little functions, or how they were composed. It was how they got used. The Pythagorean theorem works with Cartesian coordinates in a two-dimensional plane, not with latitude and longitude on the curved surface of the earth. We can sometimes catch errors like this by testing against expected results. But those expected results can’t always be pushed down into the smaller functions.
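That kind of expected-result check is easy to state as a test. A sketch in Python (porting `getDistance` for brevity; the coordinates and the 1.6 km expectation come from the text above):

```python
import math

def get_distance(x_a, y_a, x_b, y_b):
    # Direct port of the JavaScript getDistance above.
    x_diff = x_a - x_b
    y_diff = y_a - y_b
    return math.sqrt(x_diff * x_diff + y_diff * y_diff)

result = get_distance(59.3293371, 13.4877472, 59.3225525, 13.4619422)

# The naive Euclidean result is a tiny number of "degrees", nowhere
# near the expected 1.6 km -- the test on the composed function fails
# and exposes the misuse, even though every little function is correct.
assert abs(result - 1.6) > 1
```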

Some might think of this as integration testing. I still think of it as unit testing. Because a unit isn’t a class, or a function. A unit is a testable, deterministic, side-effect-free chunk of code. Syntax doesn’t decide where its boundaries are. Structure doesn’t decide where its boundaries are. Behavior does.

Here are some unit testing rules that might help make this clear.

A test is not a unit test if:

• It talks to the database
• It communicates across the network
• It touches the file system
• It can’t run at the same time as any of your other unit tests
• You have to do special things to your environment (such as editing config files) to run it.

Michael Feathers – A Set of Unit Testing Rules

Notice nothing was said about functions, classes, packages, objects, or procedures. Your code structure is not the issue here. It’s about behavior.

So I think of a unit as any chunk of code that you can carve out to test, so long as it follows these rules.

Does this mean every function must have tests written against it? No. Every function should be tested according to how it’s used. Private functions have limited use, so they can be tested by testing the public functions that use them. Make that use widespread, though, and you’re going to need more testing.
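As a toy illustration (hypothetical names), the private helper gets its coverage entirely through the public function’s tests:

```python
def _normalize(name):
    # "Private" helper: limited use, no direct tests of its own.
    return name.strip().lower()

def is_same_user(a, b):
    # Public interface: tests are written against this function,
    # which exercises _normalize along the way.
    return _normalize(a) == _normalize(b)

assert is_same_user(" Alice ", "alice")
assert not is_same_user("alice", "bob")
```

If `_normalize` later gains many call sites, that widespread use is the signal to promote it to a tested unit of its own.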

Focus on use and behavior.

## c# – Altering the state of a composed class by its composing classes. Is there any rule or principle against doing so?

In other words, is it good practice to define the method that removes an element from a collection inside the class representing the element, given a composition relationship?

Something like: `listElement.Delete();`

In the following example, I’m refactoring the code by creating additional classes which take over some of the responsibilities of the main class (`Geometry`), hopefully following the SoC (separation of concerns) principle.

Notes:

• The `Geometry` class has fields (`nodes` and `radii`) that hold the data being interpreted into abstract objects such as Point, Arch or Line.
• The classes `Point`, `Arch` and `Line` inherit from the abstract class `GeoEntity`, which takes a dependency on the `Geometry` class via constructor injection.

### Before refactoring

```csharp
public class Geometry
{
    private List<Vector2> nodes;

    public void DrawLine() { /* Do the magic. */ }
    public void InsertPoint() { /* Do the magic. */ }
    public void InsertArch() { /* Do the magic. */ }

    public void TranslateNode(double dx, double dy) { /* Do the magic. */ }
    public void TranslateLine(double dx, double dy) { /* Do the magic. */ }

    public void RemoveNode(int index) { /* Do the magic. */ }
    public void RemoveLine(int index) { /* Do the magic. */ }
    public void RemoveArch(int index) { /* Do the magic. */ }

    public void DoSpecialNodeRelatedAction1() { /* Do the magic. */ }
    public void DoSpecialNodeRelatedAction2() { /* Do the magic. */ }
    public void DoSpecialLineRelatedAction(double someValue) { /* Do the magic. */ }
}
```

### After refactoring

```csharp
public class Geometry
{
    private List<Vector2> nodes;
    private Dictionary<int, double> radii; // Type assumed for illustration.

    public IReadOnlyList<Point> Points { get { /* Get them magically. */ } }
    public IReadOnlyList<Line> Lines { get { /* Get them magically. */ } }
    public IReadOnlyList<Arch> Arches { get { /* Get them magically. */ } }

    public void DrawLine() { /* Do the magic. */ }
    public void InsertPoint() { /* Do the magic. */ }
    public void InsertArch() { /* Do the magic. */ }

    public abstract class GeoEntity
    {
        protected readonly Geometry geometry;

        protected GeoEntity(Geometry geometry, int index)
        {
            this.geometry = geometry;
            this.Index = index;
        }

        public int Index { get; }

        protected abstract void DoSpecificDeletion();

        public void Delete()
        {
            DoSpecificDeletion();
            geometry.nodes.RemoveAt(Index);

            var exists = geometry.radii.TryGetValue(Index, out var radius);
        }
    }

    public class Point : GeoEntity
    {
        internal Point(Geometry geometry, int index) :
            base(geometry, index) { }

        protected override void DoSpecificDeletion() { /* Do the magic. */ }
        public void Translate(double dx, double dy) { /* Do the magic. */ }
        public void DoSpecialAction1() { /* Do the magic. */ }
        public void DoSpecialAction2() { /* Do the magic. */ }
    }

    public class Line : GeoEntity
    {
        internal Line(Geometry geometry, int index) :
            base(geometry, index) { }

        protected override void DoSpecificDeletion() { /* Do the magic. */ }
        public void Translate(double dx, double dy) { /* Do the magic. */ }
        public void DoSpecialAction(double someValue) { /* Do the magic. */ }
    }

    public class Arch : GeoEntity
    {
        internal Arch(Geometry geometry, int index) :
            base(geometry, index) { }

        protected override void DoSpecificDeletion() { /* Do the magic. */ }
    }
}
```

The refactoring in this case should enforce the SoC principle, resulting in a cleaner structure with multiple smaller classes, each responsible for altering the data in the `Geometry` class in its own specific way, rather than having all the methods defined in the `Geometry` class.

A possible issue that I found is shown in the example:

```csharp
void GeometryConsumingMethod(Geometry geometry)
{
    var a = geometry.Points[0];
    var b = geometry.Points[0];

    a.Delete();
    a.DoSpecialAction1();    // Possible logical error: 'a' was just deleted.
    b.DoSpecialAction1();    // Possible logical error: 'b' refers to the same deleted element.
}
```

However, I’m not sure if this is acceptable or not from an OOP perspective.

I’m curious what else could be wrong with this approach.

## measure theory – Let a set X be composed of a union of its disjoint subsets. Show their union is a sigma-algebra.

This is exercise 15 from section 2B of Sheldon Axler’s Measure, Integration & Real Analysis, available for free here.

Suppose $$X$$ is a set and $$E_1, E_2, \ldots$$ is a disjoint sequence of subsets of $$X$$ such that $$\bigcup^{\infty}_{k=1} E_k = X$$.

Let $$S = \left\{ \bigcup_{k \in K} E_k \,\middle|\, K \subset \mathbb{Z}_+ \right\}$$.

Show that $$S$$ is a $$\sigma$$-algebra on $$X$$.

How do I show that $$\emptyset \in S$$?

Normally, I would simply take the empty union and consider that to be the empty set. However, the wording of this question makes it seem like I can only consider elements of $$S$$ built from at least one of the $$E_k$$ (hence the $$\mathbb{Z}_+$$).

If that is indeed the case, then by my count there’s no guarantee that the empty set is in $$S$$ since that would be assuming that one of those disjoint subsets is the empty set itself.

I suppose you could take $$K = \emptyset$$, but the empty set isn’t one of the sets given in our sequence, so I am uncomfortable with that as a solution.
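For what it’s worth, the standard convention is that a union over an empty index set is empty, and $$\emptyset$$ is a legitimate subset of $$\mathbb{Z}_+$$, so taking $$K = \emptyset$$ (and, at the other extreme, $$K = \mathbb{Z}_+$$) gives

```latex
\varnothing \;=\; \bigcup_{k \in \varnothing} E_k \in S,
\qquad
X \;=\; \bigcup_{k \in \mathbb{Z}_+} E_k \in S.
```

Note that $$K$$ is an index set, not one of the $$E_k$$, so nothing in the sequence needs to be empty for this to work.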

I’m asking this question because this section is super important and I need clarity in order to continue with this book (which is quite good). Please be kind, I’ve been struggling diligently.

## r – ggplot: how to add legend to a plot composed of several geom_ribbon() and geom_line()?

Question: how can I add a legend to this specific plot?


The legend should include:

`nd$y_fem` – the blue line – should appear in the `legend` as “5-yrs probability of death”

`nd$y_tre` – the red line – should appear in the `legend` as “3-yrs probability of death”

`nd$y_et` – the green line – should appear in the `legend` as “1-yr probability of death”

Preferably, the `legend` should include both the `line` and the `fill`.

How can this be done?

```r
ggplot(nd, aes(x = n_fjernet)) +
  geom_ribbon(aes(ymin = y_tre, ymax = y_fem), alpha = .15, fill = "#2C77BF") +
  geom_line(aes(y = y_fem), size = 3, color = "white") +
  geom_line(aes(y = y_fem), color = "#2C77BF", size = .85) +

  geom_ribbon(aes(ymin = y_et, ymax = y_tre), alpha = .15, fill = "#E38072") +
  geom_line(aes(y = y_tre), size = 3, color = "white") +
  geom_line(aes(y = y_tre), color = "#E38072", size = .85) +

  geom_ribbon(aes(ymin = 0, ymax = y_et), alpha = .15, fill = "#6DBCC3") +
  geom_line(aes(y = y_et), size = 3, color = "white") +
  geom_line(aes(y = y_et), color = "#6DBCC3", size = .85) +

  scale_x_continuous(breaks = seq(0, 10, 2), limits = c(0, 10))
```

My data

```r
nd <- structure(list(y_et = c(0.473, 0.473, 0.472, 0.471, 0.471, 0.47,
0.47, 0.469, 0.468, 0.468, 0.467, 0.467, 0.466, 0.465, 0.465,
0.464, 0.464, 0.463, 0.462, 0.462, 0.461, 0.461, 0.46, 0.459,
0.459, 0.458, 0.458, 0.457, 0.456, 0.456, 0.455, 0.455, 0.454,
0.453, 0.453, 0.452, 0.452, 0.451, 0.45, 0.45, 0.449, 0.449,
0.448, 0.447, 0.447, 0.446, 0.446, 0.445, 0.445, 0.444, 0.443,
0.443, 0.442, 0.442, 0.441, 0.44, 0.44, 0.439, 0.439, 0.438,
0.438, 0.437, 0.436, 0.436, 0.435, 0.435, 0.434, 0.433, 0.433,
0.432, 0.432, 0.431, 0.431, 0.43, 0.429, 0.429, 0.428, 0.428,
0.427, 0.427, 0.426, 0.425, 0.425, 0.424, 0.424, 0.423, 0.423,
0.422, 0.421, 0.421, 0.42, 0.42, 0.419, 0.419, 0.418, 0.417,
0.417, 0.416, 0.416, 0.415), y_tre = c(0.895, 0.894, 0.894, 0.893,
0.893, 0.893, 0.892, 0.892, 0.891, 0.891, 0.89, 0.89, 0.889,
0.889, 0.889, 0.888, 0.888, 0.887, 0.887, 0.886, 0.886, 0.886,
0.885, 0.885, 0.884, 0.884, 0.883, 0.883, 0.882, 0.882, 0.881,
0.881, 0.881, 0.88, 0.88, 0.879, 0.879, 0.878, 0.878, 0.877,
0.877, 0.876, 0.876, 0.875, 0.875, 0.875, 0.874, 0.874, 0.873,
0.873, 0.872, 0.872, 0.871, 0.871, 0.87, 0.87, 0.869, 0.869,
0.868, 0.868, 0.867, 0.867, 0.866, 0.866, 0.865, 0.865, 0.865,
0.864, 0.864, 0.863, 0.863, 0.862, 0.862, 0.861, 0.861, 0.86,
0.86, 0.859, 0.859, 0.858, 0.858, 0.857, 0.857, 0.856, 0.856,
0.855, 0.855, 0.854, 0.854, 0.853, 0.853, 0.852, 0.852, 0.851,
0.851, 0.85, 0.85, 0.849, 0.848, 0.848), y_fem = c(0.974, 0.974,
0.973, 0.973, 0.973, 0.973, 0.973, 0.973, 0.972, 0.972, 0.972,
0.972, 0.972, 0.971, 0.971, 0.971, 0.971, 0.971, 0.971, 0.97,
0.97, 0.97, 0.97, 0.97, 0.969, 0.969, 0.969, 0.969, 0.969, 0.968,
0.968, 0.968, 0.968, 0.968, 0.967, 0.967, 0.967, 0.967, 0.967,
0.966, 0.966, 0.966, 0.966, 0.966, 0.965, 0.965, 0.965, 0.965,
0.965, 0.964, 0.964, 0.964, 0.964, 0.963, 0.963, 0.963, 0.963,
0.963, 0.962, 0.962, 0.962, 0.962, 0.961, 0.961, 0.961, 0.961,
0.961, 0.96, 0.96, 0.96, 0.96, 0.959, 0.959, 0.959, 0.959, 0.958,
0.958, 0.958, 0.958, 0.957, 0.957, 0.957, 0.957, 0.957, 0.956,
0.956, 0.956, 0.956, 0.955, 0.955, 0.955, 0.955, 0.954, 0.954,
0.954, 0.954, 0.953, 0.953, 0.953, 0.952), n_fjernet = c(0, 0.1,
0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 1.1, 1.2, 1.3, 1.4,
1.5, 1.6, 1.7, 1.8, 1.9, 2, 2.1, 2.2, 2.3, 2.4, 2.5, 2.6, 2.7,
2.8, 2.9, 3, 3.1, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7, 3.8, 3.9, 4,
4.1, 4.2, 4.3, 4.4, 4.5, 4.6, 4.7, 4.8, 4.9, 5, 5.1, 5.2, 5.3,
5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6, 6.1, 6.2, 6.3, 6.4, 6.5, 6.6,
6.7, 6.8, 6.9, 7, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9,
8, 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 8.9, 9, 9.1, 9.2,
9.3, 9.4, 9.5, 9.6, 9.7, 9.8, 9.9)), row.names = c(NA, -100L), class = c("data.table",
"data.frame"))
```

## duplication – Composed primary keys: “should” duplicates and null values be accepted, and according to which norm?

I just took an SQL test that included a question about the possibility of having duplicates in a primary key column.
The expected answer was no. Mine was, “Yes, if the column is part of a composed primary key.”

I did some testing to check that I was right, and my MySQL has no problem with this at all.

Null values, on the other hand, weren’t accepted; they were only converted to 0 when using non-strict mode.

I tried to check the official norm, which I assume would be the ISO specification… but it’s not free: https://www.iso.org/obp/ui/fr/#iso:std:iso-iec:9075:-1:ed-5:v1:en

I’m not sure that using columns which allow duplicates as part of a primary key makes much sense, but the fact is, it’s possible.
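For what it’s worth, SQLite (reached here through Python’s standard library, so not one of the DBMSs asked about, just an illustration) shows the same behavior for duplicates within one column of a composed key:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (a INTEGER, b INTEGER, PRIMARY KEY (a, b))")

# Duplicates within a single column of the composed key are accepted...
con.execute("INSERT INTO t VALUES (1, 1)")
con.execute("INSERT INTO t VALUES (1, 2)")

# ...but the key pair as a whole must stay unique.
try:
    con.execute("INSERT INTO t VALUES (1, 1)")
    duplicate_pair_rejected = False
except sqlite3.IntegrityError:
    duplicate_pair_rejected = True

assert duplicate_pair_rejected
```

In other words, uniqueness is enforced on the tuple of key columns, not on each column individually.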

Is what I’m seeing with MySQL specific to that DBMS?

I don’t have any other DBMS at the current time, but would be interested to know if SQL Server, PostgreSQL or Oracle Database have stricter restrictions about this.

## In C++, does it make sense to have a library project be composed of other libraries?

I’m working on a C++ project which is currently divided into “sub modules” / “components”. Each of these is compiled into a separate library (components are usually 10–20 files).
The libraries are linked into tests which ensure that each component works as expected.

I’ve now started to work on the “main” part of the project, which uses all those different components. The problem is that, in the end, I want to ship this project as a dynamic library.
However, I’m running into problems linking libraries into a library. I’m not sure if this is because I’m doing something wrong with my tools or if this is simply not possible.

As such, my question is:

Does my approach of making the separate components libraries, so I can easily develop and test them individually, make sense in C++, given that I want to deliver the project itself as a library?