web part – SPFx radio button click to update label value

Please help. I have a no-framework SPFx web part on SharePoint 2019, and I am having trouble calling a function automatically when a radio/checkbox button is clicked within the form. Here is a radio button within the render method:

public render(): void {
  this.domElement.innerHTML = `
    <div class="${styles.orderform}">
      <div class="${styles.container}">
        <div class="${styles.row}">
          <div class="${styles.column}">
            <span class="${styles.title}"></span>
            <div id="row">
              <div class="col-50 idSubscriptionption">
                <input type="radio" value="Subscription" id="idSubscriptionption" onchange="handleClick()" name="group1">
                <label id="Subscription"></label><br>
              </div>

Neither the event handler nor the function receives any onchange trigger:

private setButtonsEventHandlers(): void {
  const webPart: OrderformWebPart = this;
  this.domElement.querySelector('button.create-Button').addEventListener('click', () => { webPart.SaveItem(); });
  this.domElement.querySelector('idSubscriptionption').addEventListener('click', () => { handleClick(); });
}

function handleClick() {
  let elsub = <HTMLInputElement>document.getElementById('idSubscriptionption');
  if (elsub.checked) {
    document.getElementById('Subscription').innerText = 'Subscription chosen';
    alert('Subscription selected');
  } else {
    document.getElementById('Subscription').innerText = 'No Subscription';
  }
}
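In case it helps, here is a minimal sketch of how the wiring is usually done without inline handlers (assuming the ids from the snippets above, and that `setButtonsEventHandlers` runs after `render()` sets `innerHTML`). Two things stand out in the posted code: `querySelector` takes a CSS selector, so an id needs a `#` prefix, and the inline `onchange="handleClick()"` looks for a global function that does not exist in the bundled web part.

```typescript
// Sketch only: 'idSubscriptionption' and 'Subscription' are the ids used in
// the question. The label text is computed by a pure helper so the logic can
// be exercised outside the DOM.
function subscriptionLabel(checked: boolean): string {
  return checked ? 'Subscription chosen' : 'No Subscription';
}

/* Inside the web part class, called after render():

private setButtonsEventHandlers(): void {
  // Note the '#': querySelector expects a CSS selector, not a bare id.
  const radio = this.domElement.querySelector<HTMLInputElement>('#idSubscriptionption');
  radio?.addEventListener('change', () => {
    const label = this.domElement.querySelector<HTMLLabelElement>('#Subscription');
    if (label) { label.innerText = subscriptionLabel(radio.checked); }
  });
}
*/
```

Keeping the handler attached via `addEventListener` inside the class (rather than a global `handleClick`) avoids the scoping problem entirely.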

google sheets – How to include a formula as part of the criterion when using COUNTIF?

I am looking to count how many values are higher than the average of the values listed.

A
1 50
2 40
3 30
4 20
5 10

Total: 150
Average: 30

=COUNTIF(A1:A5,">30")
         Answer: 2

How could I replace the “>30” with a formula?

I tried the following with no success:

=COUNTIF(A1:A5,""">"&AVERAGE(A1:A5)&"""")

=COUNTIF(A1:A5,textjoin("",TRUE,""">",AVERAGE(A1:A5),"""")
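For reference, the criterion string can be built by concatenating the comparison operator with the formula's result directly; no extra literal quote marks around the number are needed:

```
=COUNTIF(A1:A5, ">"&AVERAGE(A1:A5))
```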

Problem with a foreign key referencing a primary key which is also part of an unique key (MariaDB)

I created a database using MariaDb. I have the following author table (with id as primary key), which is extended by two other tables, author_personal and author_corporate, representing two different kinds of author and having different fields:

CREATE TABLE `author` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `author_type` char(1) NOT NULL,
  -- other fields
  PRIMARY KEY (`id`),
  UNIQUE KEY `author_UN` (`id`,`author_type`) USING BTREE,
  CONSTRAINT `author_CHECK_type` CHECK (`author_type` in ('P','C'))
);

CREATE TABLE `author_personal` (
  `id` int(10) unsigned NOT NULL,
  `surname` varchar(30) DEFAULT NULL,
  `name` varchar(30) DEFAULT NULL,
  -- other fields
  `author_type` char(1) GENERATED ALWAYS AS ('P') VIRTUAL,
  PRIMARY KEY (`id`),
  KEY `author_personal_FK` (`id`,`author_type`),
  CONSTRAINT `author_personal_FK` FOREIGN KEY (`id`, `author_type`) REFERENCES `author` (`id`, `author_type`) ON DELETE CASCADE
);

CREATE TABLE `author_corporate` (
  `id` int(10) unsigned NOT NULL,
  `corporate_name` varchar(50) DEFAULT NULL,
  `corporate_acronym` varchar(5) DEFAULT NULL,
  `author_type` char(1) GENERATED ALWAYS AS ('C') VIRTUAL,
  PRIMARY KEY (`id`),
  KEY `author_corporate_FK` (`id`,`author_type`),
  CONSTRAINT `author_corporate_FK` FOREIGN KEY (`id`, `author_type`) REFERENCES `author` (`id`, `author_type`) ON DELETE CASCADE
);

Although author.id alone would be enough, I decided to create a UNIQUE KEY on (id, author_type) to be referenced by the foreign keys in the two other tables, so that it is impossible for author_personal to reference an author not flagged as P, or for author_corporate to reference an author not flagged as C.

The problem arises when I want to reference author using the primary key, as in this table:

CREATE TABLE `work_authors` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `work_id` int(10) unsigned NOT NULL,
  `author_id` int(10) unsigned NOT NULL,
  -- other fields
  PRIMARY KEY (`id`),
  KEY `work_authors_FK` (`work_id`),
  KEY `work_authors_FK_author` (`author_id`) USING BTREE,
  CONSTRAINT `work_authors_FK` FOREIGN KEY (`work_id`) REFERENCES `work` (`id`) ON UPDATE CASCADE,
  CONSTRAINT `work_authors_FK_author` FOREIGN KEY (`author_id`) REFERENCES `author` (`id`) ON UPDATE CASCADE
);
    

Both DBeaver and PhpMyAdmin think that work_authors_FK_author references author_UN instead of the actual primary key (author.id).


This means that in DBeaver I am not able to click on the values in work_authors.author_id and open the referenced record in authors because I get the following error:

Entity (davide_library.author) association (work_authors_FK_author) columns differs from referenced constraint (author_UN) (1<>2)

I created the foreign keys using DBeaver. Although I selected PRIMARY as the unique key, the list of foreign keys always shows author_UN as the referenced object. I don't know whether this behavior comes from MariaDB or from DBeaver.

My question is:

Is there any way to explicitly reference the primary key in the DDL instead of the unique key?

or, alternatively:

Is there a different way, other than the UNIQUE constraint, to check that author_personal references only authors flagged with P, and the same for author_corporate?

react – Programmatically add “Web part gallery” in a custom web part in SharePoint Online

I have a custom web part that has different tabs. In two of the tabs/pivots I want the user to be able to add a web part from the web part gallery. To do so I need either to programmatically add the web part gallery in my code, or to add the specific out-of-the-box web parts that need to be shown (in this case the Stream web part and the Text web part). Is this possible?

real analysis – Does a sequence of Jacobians converge to the ‘correct’ continuous part plus some controlled singular part?

$\newcommand{\M}{\mathcal{M}}$
$\newcommand{\N}{\mathcal{N}}$

Let $\M,\N$ be two-dimensional smooth, compact, connected, oriented Riemannian manifolds (with or without boundary). Let $f_n \in W^{1,2}(\M,\N)$ satisfy $Jf_n > 0$ a.e., and suppose that $f_n \rightharpoonup f$ in $W^{1,2}(\M,\N)$.

Let $Jf_n\,dx$ be the measure on $\M$ associated with the function $Jf_n$, where by $dx$ I refer to the standard Riemannian volume form on $\M$. Suppose that the sequence of measures $Jf_n\,dx$ is uniformly bounded, i.e. $\sup_{n \in \mathbb{N}} \int_{\M} Jf_n\,dx < \infty$.

Does $Jf_n\,dx$ weak-star converge to $Jf\,dx + V(\N)\sum_{i\in I} a_i \delta_{x_i}$, where $I$ is some finite set with $a_i \in \mathbb{Z} \setminus \{0\}$ and $x_i \in \M$?

(Since $Jf_n \ge 0$ as measures, we in fact must have $a_i > 0$.)


I think that it follows from known results in geometric measure theory (currents), but unfortunately I am not fluent in that language so I am not sure. Is there any reference for that claim?

If I understood correctly, the idea should be as follows:

Since $Jf_n\,dx$ is a bounded sequence of non-negative measures, it converges (up to a subsequence) weak-star to some measure $\mu$. Now, a general structure theorem for currents implies that $\mu$ can be decomposed as the sum of a continuous part (which should then be $Jf\,dx$, for some reason?) and a singular part of the form $V(\N)\sum_{i\in I} a_i \delta_{x_i}$.


I think I found such a structure theorem: “Theorem 1 (Structure Theorem)” in the book Cartesian Currents in the Calculus of Variations, by Mariano Giaquinta, Giuseppe Modica, and Jiří Souček (Volume II, p. 363).

This theorem assumes $\M$ to be a Euclidean domain and $\N$ the two-sphere; I am not sure whether it holds verbatim for other manifolds.

Is the reasoning above correct?

Cascading dropdown list in SPFx web part

You can do this using SPServices:

$().SPServices.SPCascadeDropdowns({
  relationshipList: "States",
  relationshipListParentColumn: "Country",
  relationshipListChildColumn: "Title",
  parentColumn: "Country",
  childColumn: "State",
  debug: true
});

You can check the example at the link below:

https://social.technet.microsoft.com/wiki/contents/articles/37676.sharepoint-2013-cascade-dropdown-list-using-spservices-spservices-spcascadedropdowns.aspx


scanning – Why is there heavy dust and scratches only on the darker part of scanned color prints?

I was scanning my color prints, and while postprocessing them after descreening, I noticed a trend: there were always a lot of dust and scratch marks on the darker parts of the image, while the brighter areas didn't have any problems. I'm not sure what they are; I'm guessing dust or paper texture/scratches? Why is that? Is there something wrong with my scanner?

sharepoint enterprise – Need help with layout using custom CSS and JS. Want to lock down page and have web part zones at certain points

Apologies for the awful title.

I’ll try to explain this.

I have a standard page which was blank. To this I added some custom CSS and JS and made an accordion. I am using this as a base.

The first problem I ran into: if I edit the page and make changes (having to expand the accordion to get to certain bits of text), the accordion breaks and jams. It stays open and no longer works, so I have to roll the page back.

The only way I got around this was using SharePoint Designer to just add the text in. This causes more trouble, as I'm not a coder. When I try to add formatting, bold, tables, or bullets in Designer, it's all greyed out. Sometimes it isn't. So confusing.

I suppose what would be handy is to have the accordion “locked” on the page, with a web part zone in each of its sections. Then I could add my own custom parts when I needed them, without fear of the accordion breaking. If I need to change titles or icons, I could dive into Designer to make those minor changes.

Any advice and help here is appreciated.

performance tuning – Replacing a part in a dataset

I have a nested dataset, created at the correct size using Table, and I wish to replace parts of it with my data. My dataset looks like this (with placeholder values for the number of repeats etc., so that you can run the code yourself):

NumberOfRepeats = 3;
AllParameters = {58422.1`, 58427.6959583`, 58433.2919159`, 
   58438.8878726`, 58444.4838286`, 58450.0797838`, 58455.6757383`, 
   58461.2716919`, 58466.8676448`, 58472.4635969`, 58477.1595489`, 
   58480.3555014`, 58483.5514534`, 58486.7474049`, 58489.9433561`, 
   58493.1393068`, 58498.135255`, 58503.7312019`, 58509.3271479`, 
   58514.9230932`};
NumberOfExperiments = 5;
NumberOfWaveforms = 2;

NumberOfBlocks = 20;

(*Create dataset for data*)
Blocks = Table[<|"Block Number" -> i,
    "Parameters" -> <|"Freq" -> AllParameters[[i]]|>,
    "Waveforms" ->
     Table[<|"Waveform Type" -> j,
       "Repeats" ->
        Table[<|"Repeat number" -> k, "Spectrum" -> {},
          "Fitting Parameters" -> {}|>, {k, NumberOfRepeats}],
       "Fitting Parameter" -> {}, "Spectral Splitting" -> {}|>, {j,
       NumberOfWaveforms}]|>, {i, NumberOfBlocks}];

DatasetRawDataAnomaly =
 Table[<|"Experiment" -> i, "Data" -> Blocks|>, {i,
    NumberOfExperiments}] // Dataset

My issue is replacing an element of that dataset. I have tried using ReplacePart, but I need to make so many changes that it is too slow, taking up to half an hour in the code I am running. An example of what I do:

DatasetRawDataAnomaly = ReplacePart[DatasetRawDataAnomaly,
   {ExperimentType, "Data", BlockType, "Waveforms", WaveformType,
     "Repeats", RepeatCounter, "Spectrum"} -> IntensityType];

Where IntensityType is a list of lists of values of the form {{1,2},{3,4},{5,6}…}

I think running something like:

IntensityType = {{1, 2}, {2, 3}, {3, 4}};

DatasetRawDataAnomaly[1, "Data", 1, "Waveforms", 1, "Repeats", 1,
  {(<|#, "Spectrum" -> {IntensityType}|> &)}] (* ones for: experiment 1,
  block 1, waveform 1, repeat 1 *);

would be quicker, but is there a way to change the whole dataset, rather than extracting the part and changing it?

Thank you for any help you can provide.
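Edit: one approach that may be quicker, sketched under the assumption that all replacements are known when the updates run: drop to the underlying lists/associations with Normal, perform every part assignment in place on a symbol, and wrap in Dataset once at the end, instead of rebuilding the whole Dataset on each ReplacePart.

```mathematica
(* Sketch: mutate the normal (association) form in place, then wrap once. *)
raw = Normal[DatasetRawDataAnomaly];
IntensityType = {{1, 2}, {2, 3}, {3, 4}};

(* Part assignment on a held symbol avoids copying the dataset per change;
   string keys address association entries directly. *)
raw[[1, "Data", 1, "Waveforms", 1, "Repeats", 1, "Spectrum"]] = IntensityType;
(* ...repeat for every (experiment, block, waveform, repeat) combination... *)

DatasetRawDataAnomaly = Dataset[raw];
```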

list manipulation – Remove a part from all sublists

You seem to have a problem where Excel leaves some errors containing string expressions. If they appear in random places but you know what they look like, you can delete them based on their pattern. After you import:

data=ToExpression[Import["https://pastebin.com/raw/rSVRyTge"]];

you can remove them either by exact pattern:

DeleteCases[data,{"",-6 ""},Infinity]

or by some generalization:

DeleteCases[data,{___,_String,___},Infinity]