webforms – How can I enforce a default machine name prefix?

By adding this code to a custom module, administrators can create forms whose machine names can be exported to config as-is, while other users with permission to create webforms will automatically have the machine name prefixed with "site_".

use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_form_FORM_ID_alter() for the webform add form.
 *
 * Assumes the Webform module's add form ID is "webform_add_form" (verify on
 * your install); "mymodule" stands in for your module's machine name.
 */
function mymodule_form_webform_add_form_alter(array &$form, FormStateInterface $form_state) {
  $form['#validate'][] = 'webform_create_validation';
}

/**
 * Validation handler that enforces the "site_" machine name prefix.
 */
function webform_create_validation(array &$form, FormStateInterface $form_state) {
  $current_user = \Drupal::currentUser();
  // If the user creating the webform is not an administrator.
  if (!in_array('administrator', $current_user->getRoles())) {
    // Get the machine name of the webform being created.
    $entity = $form_state->getFormObject()->getEntity();
    if ($entity) {
      $id = $entity->id();
      // Check whether the machine name starts with "site_".
      // If it does not, prepend it.
      if (substr($id, 0, 5) !== 'site_') {
        $form_state->setValue('id', 'site_' . $id);
      }
    }
  }
}

machine learning – If the samples in the dataset are naturally fairly similar to each other, would that be considered data redundancy?

I am working on building an ML/DL solution for a problem where the data is naturally similar, and I am worried that this would be considered data redundancy. My question is: is that so? And if yes, what can I do about it, given that this similarity is a feature of the actual real data?

machine learning – What do we mean by permissible transformations for the types of attributes: nominal, ordinal, interval, ratio?

I am studying data mining and I stumbled upon types of attributes.

They are

  1. Nominal

  2. Ordinal

  3. Interval

  4. Ratio

The data mining book by Tan, Steinbach, and Kumar says the permissible transformations are:

  1. Nominal: any one-to-one mapping, e.g. a permutation of values.

  2. Ordinal: new_value = f(old_value), where f is an order-preserving change of values.

  3. Interval: new_value = a * old_value + b

  4. Ratio: new_value = a * old_value

I tried making sense of this, but could not really understand what it is trying to say.

What I know:

  1. Nominal attributes:

They provide enough information to distinguish one object from another.

E.g. gender, zip codes, employee ID numbers, jersey numbers of players.

Here we can’t average the players’ jersey numbers and find something meaningful.

Even though the values are integers, no mathematical operations can be performed on them except = and ≠.

  2. Ordinal:

Ordinal values provide enough information to order the objects.

E.g. grades, {good, better, best}

Operations: <, >

If Ram’s percentage = 90% and Mohan’s percentage = 45%, we can’t say Ram is 2 times as good as Mohan.

  3. Interval:

Here the differences between values are meaningful. There is no absolute zero, so we can’t take the ratio of two measurements. E.g. temperature in Celsius or Fahrenheit, time of day, etc.

We can’t say that 10 AM is twice 5 AM. But we can say:

0-10 AM interval=10 hrs

0-5 AM interval=5 hrs

We can say the interval 0–10 AM is twice as long as the interval 0–5 AM. This is because 0 AM doesn’t mean the absence of time.

When we say 0 °F we don’t mean zero heat.

Also, 100 °F isn’t twice as hot as 50 °F.

  4. Ratio:

For ratio variables, both differences and ratios are meaningful. E.g. temperature in Kelvin, mass, monetary quantities, etc. They have a meaningful zero point: if someone’s income is 0, then there really is no income, unlike 0 AM, which means 12 midnight.
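To check whether I’m reading these transformation rules correctly, I tried a small numerical sketch (Python, with made-up values): Celsius to Fahrenheit has the form a * old_value + b, so it should be a permissible interval transformation, while kilograms to pounds has the form a * old_value, so it should be a permissible ratio transformation. Is this the right way to read it?

# My attempt at illustrating the permissible transformations (made-up values).
celsius = [10.0, 20.0, 40.0]

# Interval: new_value = a * old_value + b (Celsius -> Fahrenheit: a = 9/5, b = 32).
fahrenheit = [9 / 5 * c + 32 for c in celsius]  # [50.0, 68.0, 104.0]

# Differences keep their meaning: the 20->40 gap is still twice the 10->20 gap.
print((celsius[2] - celsius[1]) / (celsius[1] - celsius[0]))              # 2.0
print((fahrenheit[2] - fahrenheit[1]) / (fahrenheit[1] - fahrenheit[0]))  # 2.0

# ...but ratios are not preserved, which is why Celsius is only interval-scaled.
print(celsius[2] / celsius[1])        # 2.0
print(fahrenheit[2] / fahrenheit[1])  # ~1.53, not 2.0

# Ratio: new_value = a * old_value (kg -> lb: a = 2.2046, no offset allowed).
kilograms = [10.0, 20.0, 40.0]
pounds = [2.2046 * kg for kg in kilograms]

# Here even the ratios survive the change of units, so mass is ratio-scaled.
print(kilograms[2] / kilograms[1])  # 2.0
print(pounds[2] / pounds[1])        # 2.0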

unity3d – State machine – how to handle outside environment values?

I’ve got a state machine implementation that I’m working on in Unity (C#); the plan is that it will mostly be used for AI-related things.

I’m not sure how I should deal with various “inputs” / how it should interact with knowledge from the outside environment. Two approaches I’ve considered and tried so far:

1. I have a dedicated “Query” class that holds various bools. At the end of the Tick() method in each state I make some checks like

if (queries.JumpUp) { SetState(JumpState); }

that take care of switching states. To change states, I simply set the bool to true.
This seems to work fine: it creates a very loose relationship between the “query” and the resulting behaviour, and lets me place pretty much all the transition logic in a dedicated method (my base Tick() method calls a CheckForTransitions() method at its end, and it’s this method that I override and put all the transition logic in).
It works fine so far, but I’m a bit worried that this type of logic might be a bit too loose. I think it might be somewhat similar to the blackboard design pattern. It also feels a bit like an observer pattern, which could be useful – I might have multiple different types of state machines active at the same time. A “Query” class with booleans like I mentioned seems like a very natural way of implementing a general interface layer that allows for loose “communication” between the layers, etc.

2. Create virtual methods for all possible “events” I would like to be handled.

public override void TryJump() { SetState(JumpState); }

To change states, I explicitly call the method above (or some wrapper around it, implemented inside the StateMachine).

This also seems to work fine. Some slight negatives I can see compared to 1:
With approach 1, there are no issues if I choose to update my state machine independently of the game loop. With approach 2, there could be multiple calls that result in a transition between two state machine ticks/updates. This could be fixed by making calls like “TryJump()” change some buffer variable that would just hold the desired next state, with the actual transition happening at the end/start of the state machine update (rather than “TryJump()” causing an immediate change of state).
But at that point I’m getting very close to the approach used in 1, so I’d think I might as well just use that.

I don’t have a single nice place in the code where I can check exactly what sorts of transitions can happen and what their conditions are.
I will have to have tons of virtual methods, one per state per “TryJump”-type event. If I don’t want to work with a direct reference to the state machine, I will also have to write wrappers that call those TryJump events on the currently active StateMachines – so something like, say:

public void TryJump()
{
    // Forward the event to the current state of every active state machine.
    foreach (StateMachine stateMachine in ActiveStateMachines)
        stateMachine.currentState.TryJump();
}

Despite that, something just feels a bit off about the method in 1 – using booleans like that just seems a bit weird. I’m also not terribly comfortable with event-based approaches, and it feels a bit “wrong” to use a special layer with booleans etc., when I could just make a direct call that does what I want using approach 2.
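For reference, the buffered-transition variant I mentioned under approach 2 would look roughly like this (a pseudocode-style Python sketch, not my actual C# code):

class StateMachine:
    def __init__(self, initial_state):
        self.current_state = initial_state
        self.pending_state = None  # buffer holding the desired next state

    def try_jump(self):
        # Event calls only *request* a state; nothing switches immediately.
        self.pending_state = "Jump"

    def tick(self):
        # Apply at most one buffered transition per update, at the start.
        if self.pending_state is not None:
            self.current_state = self.pending_state
            self.pending_state = None
        # ... run the current state's per-tick logic here ...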

machine learning – Finding the disagreement coefficient for certain hypothesis classes and distributions

I need help with the following question:

First some definitions,
Let $\mathcal{X} \subseteq \mathbb{R}^d$ be some example domain and let $\mathcal{H}$
be a hypothesis class on a distribution $\mathcal{D}_{\mathcal{X}}$ over $\mathcal{X}$.
Let $h\in\mathcal{H}$ be some hypothesis. A ball with radius $\epsilon$ around $h$ is the set of all hypotheses that are at most $\epsilon$ “different” from $h$:
$B(h,\epsilon)=\{h'\in\mathcal{H} \mid \Pr_{x\sim \mathcal{D}_{\mathcal{X}}} (h(x)\neq h'(x))\leq \epsilon\}$
The disagreement region is the set of all examples $x\in\mathcal{X}$ on which not all hypotheses agree:
$DIS(\mathcal{H}) = \{x\in\mathcal{X} \mid \exists h_1, h_2\in \mathcal{H},\ h_1(x)\neq h_2(x)\}$
And finally, the disagreement coefficient for a hypothesis class $\mathcal{H}$ is defined as follows:
$\theta(\mathcal{H}, \mathcal{D}_{\mathcal{X}})=\max_{h\in\mathcal{H}}\theta_h$
where $\theta_h=\sup_{\epsilon>0}\frac{\Pr_{x\sim \mathcal{D}_{\mathcal{X}}}(x\in DIS(B(h,\epsilon)))}{\epsilon}$
With that in hand the problem is:
Given the hypothesis class of axis-parallel rectangles:
$\mathcal{H}_d = \{h_{a,b} \mid a,b\in\mathbb{R}^d\}$, where $\forall x\in \mathcal{X}: h_{a,b}(x) = 1$ iff $\forall 1\leq i\leq d:\ x(i)\in(a(i),b(i))$,
which can also be written as $h_{a,b}(x)=\Pi_{i=1}^d \mathbb{I}(x(i)\in(a(i),b(i))).$
a) We consider a distribution $\mathcal{D}_{\mathcal{X}}$ which is uniform on $(0,1)^d$.

b) $\mathcal{H}'_d = \{h_{a,b}\in \mathcal{H}_d \mid \forall 1 \leq i \leq d:\ a(i),b(i)\in \mathbb{Z},\ a(i)<b(i)\}$,
and $\mathcal{D}_{\mathcal{X}}$ is a uniform distribution on $(0,N)^d$ for some $N\in\mathbb{Z}$.

There is more to this question but I’m only interested in the second part.

The first part is rather easy: if we take $a=b=0$, then $h_{0,0}\in\mathcal{H}_d$, and the ball around $h_{0,0}$ is the set of all hypotheses $h_{a,b}$ such that the volume of the $d$-dimensional rectangle defined by $a$ and $b$ is at most $\epsilon$. Any point $x\in\mathbb{R}^d$ can be enclosed in such a rectangle, so $DIS(B(h_{0,0},\epsilon))=\mathcal{X}$, hence $\Pr_{x\sim \mathcal{D}_{\mathcal{X}}}(x\in DIS(B(h_{0,0},\epsilon)))=1$, so $\theta_h=\sup_{\epsilon\in(0,1)} \frac{1}{\epsilon}=\infty$, which makes $\theta(\mathcal{H},\mathcal{D}_{\mathcal{X}})=\infty$.

But what happens if no such hypothesis exists? Then, to find the set of all hypotheses that disagree on an $\epsilon$-mass of the examples (i.e. a ball around $h_{a,b}$), it is not sufficient to extend the rectangle by adding $\epsilon$ volume to it, since you could perhaps also move the rectangle in some direction that makes two hypotheses disagree on $\epsilon$ mass (probability of disagreement equal to $\epsilon$).

Any help with calculating this for part 2?

Thanks in advance.

virtual machine – Running a 16-bit application on Windows 10 x64 without VT-x

I have a legacy application (exe header = “MZ”) which I usually run through VMware Workstation or on the 32-bit version of Windows. Now I am at a computer with a Core 2 Duo processor that doesn’t support VT-x, running Windows 10 x64, so VMware won’t run and the 16-bit emulation built into 32-bit Windows isn’t available either.

I could try to swap the processor for a model which does support VT-x … but before I go after that … is there any other way I can try to get this software running on Windows 10 x64?

Thnx, Armin.

windows patching – CBS error on a 2012 R2 machine

Installation of a hotfix fails with the error message below.

2021-07-15 23:07:07, Error CSI 00000012 (F) STATUS_OBJECT_PATH_NOT_FOUND #85609# from Windows::Rtl::SystemImplementation::CBufferedRegistryProvider::SysOpenKey(flg = 0, key = {provider=NULL, handle=0, name= (“null”)}, da = (KEY_ALL_ACCESS|ACCESS_SYSTEM_SECURITY), oa = @0x3e029dbf70->OBJECT_ATTRIBUTES {s:48; rd:NULL; on:(160)”\Registry\Machine\Software\Microsoft\Windows\CurrentVersion\SideBySide\Winners\amd64_c70847874b337fa3a84bdc36a8952e9f_31bf3856ad364e35_none_aed515309a98349a\6.3″; a:(OBJ_CASE_INSENSITIVE)}, disp = Unmapped disposition: 43892976 (0x029dc0f0))(gle=0xd000003a)
2021-07-15 23:07:07, Error CSI 00000013@2021/7/15:15:07:07.230 (F) base\wcp\sil\reg_buffered.cpp(500): Error STATUS_OBJECT_PATH_NOT_FOUND originated in function Windows::Rtl::SystemImplementation::CBufferedRegistryProvider::SysOpenKey expression: (null)
(gle=0x80004005)

I also validated that the registry key does not exist:

Software\Microsoft\Windows\CurrentVersion\SideBySide\Winners\amd64_c70847874b337fa3a84bdc36a8952e9f_31bf3856ad364e35_none_aed515309a98349a\6.3

How can we fix this?

machine learning – How to choose the probability distribution and its parameters in maximum likelihood estimation

I’m reading the book “Mathematics for Machine Learning”; it’s a free book that you can find here. I’m now reading section 8.3 of the book, which explains maximum likelihood estimation (MLE).
This is my understanding of how MLE works in machine learning:

Say we have a dataset of vectors $(x_1, x_2, \dots, x_n)$, along with corresponding labels $(y_1, y_2, \dots, y_n)$ which are real numbers, and finally a model with parameters $\theta$. MLE is a way to find the best parameters $\theta$ for the model, so that the model maps $x_n$ to $\hat{y}_n$ and $\hat{y}_n$ is as close to $y_n$ as possible.

For each $x_n$ and $y_n$ we have a probability distribution $p(y_n|x_n,\theta)$. Basically it estimates how likely our model with parameters $\theta$ is to output $y_n$ when we feed it $x_n$ (and the bigger the probability, the better).

We then take a logarithm of each of the estimated probabilities and sum up all the logarithms, like this:
$$\sum_{n=1}^N \log p(y_n|x_n,\theta)$$

The bigger this sum, the better our model with parameters $\theta$ explains the data, so we have to maximize the sum.
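To check my understanding, here is a tiny numerical sketch of that sum (Python, with my own made-up data, assuming a Gaussian likelihood like in the book’s Example 8.4 discussed below):

import numpy as np

# Made-up data: each x_n is a 2-D feature vector, each y_n a real-valued label.
X = np.array([[1.0, 2.0], [2.0, 0.5], [3.0, 1.5]])
y = np.array([2.9, 2.6, 4.4])

def log_likelihood(theta, sigma, X, y):
    # Sum of log p(y_n | x_n, theta) for a Gaussian likelihood with
    # mean x_n^T theta and a fixed variance sigma^2.
    mean = X @ theta
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (y - mean) ** 2 / (2 * sigma**2))

# Parameters that fit the data well give a larger sum than parameters
# that fit poorly; MLE picks the theta that maximizes it.
print(log_likelihood(np.array([1.0, 1.0]), 1.0, X, y))  # about -2.77
print(log_likelihood(np.array([0.0, 0.0]), 1.0, X, y))  # about -20.0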

What I don’t understand is how we choose the probability distribution $p(y_n|x_n,\theta)$ and its parameters. In the book there is Example 8.4, where they choose the probability distribution of the noise to be a Gaussian with zero mean, $\epsilon_n \sim \mathcal{N}(0,\,\sigma^{2})$. They then assume that the linear model $x_n^T\theta$ is used for prediction, so:
$$p(y_n|x_n,\theta) = \mathcal{N}(y_n|x_n^T\theta,\,\sigma^{2})$$
and I don’t understand why they replaced the zero mean with $x_n^T\theta$; also, where do we get the variance $\sigma^{2}$?

So this is my question: how do we choose the probability distribution and its parameters? In the example above the distribution is Gaussian, but it could be any other existing distribution, and different distributions have different types and numbers of parameters. Also, as I understood it, each $x_n$ and $y_n$ has its own probability distribution $p(y_n|x_n,\theta)$, which complicates the problem even more.

I would really appreciate your help. Also note that I’m just learning the math for machine learning and am not very skilled. If you need any additional info, please ask in the comments.

Thanks!

machine learning – Could someone explain the algorithm from this paper? (Thank you)

I’m trying to get a fair understanding of artificial immune systems. To do this I’ve been reviewing this paper, but the algorithm and mathematics are over my head. Could someone explain the material below to me in simple terms? Thank you.

In the case of this paper, I’ve been able to understand that the system takes the IoT-Bot dataset, then applies feature reduction through the Information Gain algorithm so as to remove features with a low ranking. I also understand that the feature selection approach in this paper attempts to categorize signals as one of the following: safe, danger, and PAMP.

But when it comes to how the algorithm and mathematics below actually work, I’m at a loss. Any help would be much appreciated.

I currently don’t have enough points to embed the images below:

https://i.stack.imgur.com/0n5C7.png

https://i.stack.imgur.com/5IKcy.png

If you believe this question is inappropriately placed, please let me know where to post it and I’m happy to post it there instead. Or if this thread is missing details you believe are necessary, let me know and I’ll add those.

mount – How to create a mount point to a shared drive from a Linux subsystem on a Windows machine?

I have a Windows machine (Windows 10). There I’ve installed the Ubuntu app from Canonical Group Limited; this gives me an Ubuntu subsystem, which I regularly use for grep, sort and other interesting command-line tools.

Now I have created a shared drive on another machine (\\other_machine\Log), which contains some logfiles I’d like to analyse.

I have created two mount points in order to access the C: drive and the D: drive on my PC, and this is working fine:

Linux Prompt$ df -hk
Filesystem     1K-blocks      Used Available Use% Mounted on
C:            999036924 731107332 267929592  74% /mnt/c
D:            976727036   2621776 974105260   1% /mnt/d

Now I guess that, in order to access the mentioned shared directory, I need to create a mount point for it.

Does anybody know how I can do that?

Thanks

Edit

I am willing to modify the /etc/mtab file, if that is what it takes:

Linux Prompt>cat /etc/mtab
C:\134 /mnt/c drvfs rw,noatime,uid=1000,gid=1000,case=off 0 0
D:\134 /mnt/d drvfs rw,noatime,uid=1000,gid=1000,case=off 0 0
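For what it’s worth, my best guess so far – untested, and just extrapolated from the drvfs entries above – is something like:

Linux Prompt$ sudo mkdir /mnt/log
Linux Prompt$ sudo mount -t drvfs '\\other_machine\Log' /mnt/log

Is that the right approach for a network share, or is something else needed?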