Integration ID is not specified or invalid when authorising

I suspect that the ID of 0 is your problem. From Magento\Integration\Controller\Adminhtml\Integration\Edit:

$integrationId = (int)$this->getRequest()->getParam(self::PARAM_INTEGRATION_ID);
if ($integrationId) {
    //stuff happens
} else {
    $this->messageManager->addErrorMessage(__('Integration ID is not specified or is invalid.'));
    $this->_redirect('*/*/');
    return;
}

I would expect PHP to treat an integer of zero as boolean false, causing the condition to fail and triggering your error message, and I would expect any ID other than 0 to work. Since I don't see the integration.integration_id field referenced as a key by any other table, you may be able to change that value directly in the database and see if that helps.

numerical integration – System of differential equations on a non-uniform grid

I need to solve the following set of ODEs numerically:

$$\frac{dy}{dx}=Ay+Bz, \qquad \frac{dz}{dx}=Cy+Dz$$

where the independent variable $x$ is a non-uniformly spaced array of points, and so are $A$, $B$, $C$ and $D$.

What is the best way of achieving this, given that NDSolve normally specifies the independent variable in $\{x, x_{\min}, x_{\max}\}$ format?
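
One possible approach (a sketch only, with placeholder names xgrid, Adata, Af, etc., and made-up coefficient data) is to build interpolating functions for $A$, $B$, $C$, $D$ from their tabulated values on the non-uniform grid and hand those to NDSolve, which is indifferent to how the underlying data points are spaced:

xgrid = Sort[RandomReal[{0, 10}, 50]];          (* stand-in for the non-uniform grid *)
Adata = Transpose[{xgrid, Sin[xgrid]}];         (* replace with your tabulated A values *)
Bdata = Transpose[{xgrid, Cos[xgrid]}];
Cdata = Transpose[{xgrid, 0.1 xgrid}];
Ddata = Transpose[{xgrid, Exp[-xgrid]}];
{Af, Bf, Cf, Df} = Interpolation /@ {Adata, Bdata, Cdata, Ddata};

sol = NDSolve[{y'[x] == Af[x] y[x] + Bf[x] z[x],
    z'[x] == Cf[x] y[x] + Df[x] z[x],
    y[Min[xgrid]] == 1, z[Min[xgrid]] == 0},    (* initial conditions are assumed *)
   {y, z}, {x, Min[xgrid], Max[xgrid]}]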

continuity – Is it possible to define integration over a discontinuous domain?

I’m trying to think about this from the Riemann integration perspective, so let me know if Lebesgue integration or something else is better. An example where I seem to be running into problems is integration over the domain $\mathbb{R}\setminus\mathbb{I}$. If you want to integrate $f\colon\mathbb{R}\setminus\mathbb{I}\rightarrow\mathbb{R}$ given by

$$f(x) = e^{-x}$$

over $D = (0, 4)$, you should be able to get away with Riemann integration (even though the domain has gaps), since the excluded points are negligible. My intuition for this is that $f(x)\Delta x$ will disappear in the limit $\Delta x \rightarrow 0$ at these points, since there are only finitely many of them. In a Riemann sum, as the number of partitions goes to infinity, the contributions (or lack thereof) of 4 pieces will make no difference. So in that sense this integral over a finite domain seems to make sense, or be something we can in a way “get away with”. But what about the integral from $0$ to $\infty$? There, we have an infinite number of missing partitions. What would the difference amount to? How do you compute, or even define, integration here?
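
For the half-line case, one way to make the bookkeeping concrete (assuming $\mathbb{I}$ denotes the integers, so the excluded set is countable) is to split the improper integral at the excluded points; the missing points contribute nothing, and the value agrees with the integral over all of $(0, \infty)$:

$$\int_{(0,\infty)\setminus\mathbb{I}} e^{-x}\,dx = \sum_{n=0}^{\infty}\int_{n}^{n+1} e^{-x}\,dx = \sum_{n=0}^{\infty}\left(e^{-n}-e^{-(n+1)}\right) = 1.$$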

numerical integration – Mathematica can’t seem to handle a truncated BinormalDistribution when there is a non-zero correlation coefficient

I would like to use Mathematica to analyze (e.g., compute moments, plot, etc.) a truncated bivariate normal distribution. For example:

d = BinormalDistribution[{0, 0}, {.5, 1}, .5];
dTruncated = TruncatedDistribution[{{-.5, Infinity}, {0, 2}}, d]
Mean[dTruncated]

When I run this code, though, Mathematica begins evaluating and never stops (I ran it all night and got nothing). I don’t get any error messages. The same happens when I try to plot the PDF of dTruncated or sample points from the distribution.

I’m running Mathematica 11.2 on Windows 10 with a 4.6 GHz Intel i9 processor and 64 GB of RAM, so I don’t think it’s a processing-speed issue.

The problem only seems to occur when the correlation coefficient is non-zero. When I run the same code as above but set the correlation coefficient in BinormalDistribution to 0, it works fine:

d = BinormalDistribution[{0, 0}, {.5, 1}, 0];
dTruncated = TruncatedDistribution[{{-.5, Infinity}, {0, 2}}, d]
Mean[dTruncated]

This immediately spits out an answer. I have tried numerous combinations of parameter values, and it only ever works when the correlation coefficient equals 0. Unfortunately, that’s not very helpful for me.

There is an R package that does this easily (see here and here) in a few lines of code:

> library(tmvtnorm)
> mu <- c(0, 0)
> sigma <- matrix(c(.5, .5, .5, 1), 2, 2)
> a <- c(-0.5, -Inf)
> b <- c(0, 2)
> moments <- mtmvnorm(mean=mu, sigma=sigma,
+ lower=a, upper=b)

Any assistance with this would be very much appreciated!
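
In case it helps to narrow things down, the truncated moments can at least be obtained numerically by integrating the untruncated PDF over the truncation box and normalising. This is only a sketch of a workaround (the variable names norm, meanX, meanY are mine), not a fix for TruncatedDistribution itself:

d = BinormalDistribution[{0, 0}, {.5, 1}, .5];
norm = NIntegrate[PDF[d, {x, y}], {x, -.5, Infinity}, {y, 0, 2}];      (* truncation probability *)
meanX = NIntegrate[x PDF[d, {x, y}], {x, -.5, Infinity}, {y, 0, 2}]/norm;
meanY = NIntegrate[y PDF[d, {x, y}], {x, -.5, Infinity}, {y, 0, 2}]/norm;
{meanX, meanY}      (* mean vector of the truncated distribution *)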

Numerical integration seems to give the wrong result

I am trying to integrate a long expression numerically, but I just noticed that something is off with the integration. I have to integrate from 1 to 0.1. When I integrate from 1 to 0.2 and from 0.2 to 0.1 and add the results, I get a different value than when I integrate directly from 1 to 0.1. Is this due to some problem with the integration method? Which method is better in this case?

dofdata = Import[NotebookDirectory[] <> "gstar.txt", "Table"];
g = Interpolation[dofdata, InterpolationOrder -> 2, Method -> "Hermite"];
ratedata = Import[NotebookDirectory[] <> "rate.txt", "Table"];
Subscript[C, e] = Interpolation[ratedata, InterpolationOrder -> 2, Method -> "Hermite"];
f\[Nu]s =
  Function[{r, sin2\[Theta], ms, Ti, Tf},
   Block[{GF = 1.166*10^-5, MPl = 1.22*10^19, \[Eta]B = 6.05*10^-10,
     mZ = 91.1876, mW = 80.379},
    $MaxExtraPrecision = 1000;
    NIntegrate[
     ((-Sqrt[90/(8*Pi^3)]*MPl)/(g[T]^(1/2)*T^3))*(1 + (T*g'[T])/(3*g[T]))*0.25*
      Subscript[C, e][T]*GF^2*((r*T^6)/Tf)*(g[T]/g[Tf])^(1/3)*
      ((ms^2*sin2\[Theta])/(2*((r*T^2)/Tf)*(g[T]/g[Tf])^(1/3)))^2*
      (((ms^2*sin2\[Theta])/(2*((r*T^2)/Tf)*(g[T]/g[Tf])^(1/3)))^2 +
        (Subscript[C, e][T]*GF^2*((r*T^6)/Tf)*(g[T]/g[Tf])^(1/3))^2/4 +
        (ms^2/(2*((r*T^2)/Tf)*(g[T]/g[Tf])^(1/3)) -
          ((2*Sqrt[2]*1.20206*GF*\[Eta]B*T^3)/(4*Pi^2) -
            ((8*Sqrt[2]*GF)/(3*mZ^2))*(g[T]/g[Tf])^(1/3)*2*((7*Pi^2*r*T^6)/(240*Tf)) -
            ((8*Sqrt[2]*GF)/(3*mW^2))*(g[T]/g[Tf])^(1/3)*2*((r*T^6*5.6822)/(Pi^2*Tf))))^2)^(-1)*
      (1/(Exp[((r*T)/Tf)*(g[T]/g[Tf])^(1/3)] + 1)), {T, Ti, Tf},
     Method -> {"AdaptiveQuasiMonteCarlo", "MaxPoints" -> 10^10},
     MaxRecursion -> 100]]];


In[92]:= a = f\[Nu]s[1, (20.*10^-9)^(1/2), 2.*10^-6, 1., 0.2]
b = f\[Nu]s[1, (20.*10^-9)^(1/2), 2.*10^-6, 0.2, 0.1]
c = f\[Nu]s[1, (20.*10^-9)^(1/2), 2.*10^-6, 1, 0.1]
a + b
c

Out[92]= 0.000918702

Out[93]= 0.00122894

Out[94]= 0.00127028

Out[95]= 0.00214764

Out[96]= 0.00127028

The data files used for the integration are gstar.txt and rate.txt.
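
For what it's worth, one thing to check is whether the discrepancy comes from the "AdaptiveQuasiMonteCarlo" setting rather than from the integrand itself: Monte Carlo-type strategies carry an error that depends on the points sampled in each region, so the results on the two sub-intervals need not add up exactly to the result on the full interval. A self-contained way to see the effect, using a simple stand-in integrand rather than the actual one above, is to compare the additivity defect of the default method with that of the quasi-Monte Carlo method:

test[a_, b_, opts___] := NIntegrate[Exp[-t^2] Cos[5 t], {t, a, b}, opts];

(* default global adaptive strategy: the split and the direct result agree closely *)
test[1, 0.1] - (test[1, 0.2] + test[0.2, 0.1])

(* quasi-Monte Carlo strategy: a larger additivity defect can appear *)
test[1, 0.1, Method -> "AdaptiveQuasiMonteCarlo"] -
 (test[1, 0.2, Method -> "AdaptiveQuasiMonteCarlo"] +
  test[0.2, 0.1, Method -> "AdaptiveQuasiMonteCarlo"])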

SQL Server Integration Services not showing in SQL Server Configuration Manager

I have already installed the following:

  • SQL Server 2019 Developer Edition
  • SQL Server Configuration Manager
  • SQL Server Management Studio
  • Visual Studio Community 2019
  • SQL Server Data Tools

Extensions in Visual Studio 2019:

  • Microsoft Analysis Services Projects
  • SQL Server Integration Services Project

Apparently, I’m supposed to see SQL Server Integration Services running under SQL Server Services in SQL Server Configuration Manager, but I don’t see any such service running, even though I have already successfully installed the applications listed above.

I only have the following under SQL Server Services in SQL Server Configuration Manager:

Am I supposed to run or start something first so that I can see it running?

Function providing input and integration limits for NIntegrate

I am trying to define a custom function that evaluates the numeric integral of some complicated function. The problem is that I would like to include the integration limits as an input. A MWE follows:

f[x_,y_]:= Exp[-2 x^2 y^2]
test[x_?NumericQ,IntL_]:= NIntegrate[f[x, y],IntL]
myIntL1= {y,-4 x^2,4x^2};
myIntL2= {y,-4 x^4,4x^4};

Then, if I evaluate for instance test[3, myIntL1], I get an error about an invalid limit of integration.

Is there a clever way to fix this without defining several functions, each with its own integration limits, such as

    test1[x_?NumericQ,IntL_]:= NIntegrate[f[x, y],{y,-4 x^2,4x^2}]
    test2[x_?NumericQ,IntL_]:= NIntegrate[f[x, y],{y,-4 x^4,4x^4}]

etc?

Of course, here everything looks simple, but in my case all the functions are rather long, and some are purely numeric. Since I have multiple choices for the integration limits, it would be more practical to avoid defining test1[x_], test2[x_], ..., testN[x_].

Thanks in advance,
Pablo
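
One pattern that may help (only a sketch; the names lims1 and lims2 are mine): store each set of limits as a function of x rather than as a bare list, so that the symbolic x inside the limits is replaced by the numeric argument, and wrap the call in Evaluate because NIntegrate holds its arguments:

f[x_, y_] := Exp[-2 x^2 y^2];
lims1[x_] := {y, -4 x^2, 4 x^2};      (* first choice of limits *)
lims2[x_] := {y, -4 x^4, 4 x^4};      (* second choice of limits *)
test[x_?NumericQ, lims_] := NIntegrate[f[x, y], Evaluate[lims[x]]];

test[3, lims1]
test[3, lims2]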

integration – Green’s identity with different norm

Let $\Omega \subset \mathbb{R}^n$ be a domain with a smooth boundary $\Gamma$. Suppose that $f, g \colon \mathbb{R}^n \to \mathbb{R}$ are of class $C^\infty(\overline{\Omega})$. Green’s first identity states that
$$ \int_\Omega \nabla f \cdot \nabla g \, d\Omega = \int_\Gamma f \, \nabla g \cdot \hat{n} \, d\Gamma - \int_\Omega f \, \nabla^2 g \, d\Omega. $$

The integrals over $\Omega$ above can be interpreted as taken with respect to the Hausdorff measure induced by the Euclidean norm on $\mathbb{R}^n$. The integral over $\Gamma$ can likewise be interpreted as an integral with respect to the measure induced by the same norm.

Suppose that instead of using the Euclidean norm one were to choose some other norm $\| \cdot \|$. Since all norms on a finite-dimensional normed space are equivalent, the terms $\nabla f$, $\nabla g$ and $\nabla^2 g$ would not change regardless of the norm used. However, changing the norm would most likely change the mentioned Hausdorff measure and the measure on the boundary, thus affecting all the integrals.

My question is: is there a result similar to Green’s identity, but with this change of norm applied? I’m especially interested in whether such a change would also require one to change the meaning of the term $\cdot \, \hat{n}$, i.e. whether one would need to redefine what “normal” to the boundary means in that case.

integration tests – How to automate QA testing in legacy web apps?

I am wondering what specific best practices are around full integration testing (like by a QA team). I am a frontend developer working in a large legacy web app with very little frontend testing. We have a large monolithic Rails app with React on the FE added on as an afterthought. We have a few React-based unit tests for each component, but the components often have complex interdependencies with other components, the backend (BE), or third-party services, such as the Stripe credit-card React input field or logging directly to Datadog from the FE. We also have a few “global” integration tests with Cypress, but we have mocked out the BE. We mocked the BE because it was theoretically going to be too slow to run full integration tests on each pull request (PR). This limits us greatly in what we can test, as we can’t test saving records or doing flows in the app, just basic rendering of the full page.

At previous companies I have used things like RainforestQA to write human-driven QA tests, but it is hard to scale. At my current company, we have a dedicated QA team who use spreadsheets to write down all the manual tests they need to perform for each new feature (from a large product/feature brief), before they have time to write automated tests. This list of manual tests is easily 50 to 100 scenarios for a new feature:

  • Login as this user with this set of roles for that company with these company features enabled.
  • Perform some sequence of actions.
  • During each step, check for certain results.

Our data model is fairly complicated and detailed, so we have users with roles and all kinds of custom settings, companies with all kinds of configurations and settings, and other objects even more complex than these. So there are lots of even basic variations to test in the integration tests (not counting the actual combinations of everything; I’m just talking about covering the app’s main features in large swaths with one sweeping integration test, so to speak).

When they finally get around to writing automated tests (usually a few product briefs behind, or 6 months to a year after the feature was actually released, because we are short-staffed), we use Selenium and Capybara on Rails to programmatically log in and do whatever is necessary to automate the test. However, we skip out on testing integration with third parties, such as:

  • Checking if an email was sent (or something failed here).
  • Checking if a text message was sent to Twilio (or something failed here).
  • Checking if a payment was made to Stripe (or something failed here).
  • Checking if custom PDF invoices were generated.
  • Checking if some other third party thing happened.

In an ideal world, we would be testing these things automatically, as the QA team currently needs to manually test these before every release to make sure nothing is broken.

In an ideal world, we would be able to also play with time somehow, so we don’t have to wait for 30 minutes for an authorized payment to actually be charged and the email to be sent, and stuff like that.

In an ideal world we would be writing tests with Capybara/Cucumber, Cypress, or just plain Puppeteer, and it would be loading up Gmail or checking the Stripe dashboard or checking the Twilio account or some fake phone somehow that these things were successful.

In an ideal world we would be able to seed the database instantly for each PR on GitHub, so it takes less than 2-5 minutes to run the full suite of tests.

In an ideal world we would be writing full integration tests (including with third-party services) as a FE team, so that QA would just be a formality in the end. By writing these automated tests in TDD style from the beginning, we would simplify our workflow greatly; currently I have to refresh the page and click through 10 or 20 clicks to reproduce some state in the app just to see whether my change to a deeply nested (in terms of flow) screen has the right style or integrates with the BE properly. I would like tests to be written that programmatically perform any complicated interactions needed to reproduce that state.

In an ideal world, I think we should be writing full integration tests at the customer experience/perception level. That is, when given a new feature to work on, such as SSO (single-sign on) or MFA (multi-factor authentication), we should write a test that:

  1. Configures a company with SSO through the UI.
  2. Configures a user to prefer SSO through the UI.
  3. Logs in with SSO (including clicking the appropriate “approve” buttons on Google Oauth and such).
  4. Checks that we are now logged into the app (through the UI) by trying to perform some authorized action, etc.

If that’s not realistic, then what is the state of the art?

  • How can you have a full suite of integration tests, including testing integration with third-party services for a reasonably complicated app like this?
  • How can you have these integration tests run for each PR in under 5 minutes total?
  • If not possible to run that fast, what corners could be cut and how?

For example, we could separate tests into groups that run in parallel. We could perhaps run some of the tests only under certain conditions (just before release as opposed to for each PR). But when/how would we decide this?

Basically, I would like to start moving our company down the path of better integration testing of our web app, and would like to know what is possible and at the same time practical to implement within, let’s say, a year, given just a developer or two.

integration – Is there a closed-form expression for the entropy of a tanh transform of a Gaussian random variable?

So my question is whether there is a closed-form (analytical) expression for the entropy of a variable $u$ defined as the $\tanh$ of a Gaussian random variable $x$. The reason I need a closed-form solution is that this is part of a neural network, and I need to be able to differentiate (find the gradient) with respect to the mean $\mu$ and standard deviation $\sigma$ of $x$.

For a random variable from a Gaussian distribution, $x \sim \mathcal{N}(\mu, \sigma)$, I know that the entropy is $h(X)=\frac{1}{2}\log(2\pi \sigma^2) + \frac{1}{2}$, which is closed form.

I have the transform $u = \tanh(x)$, and I’d like to get the entropy of this random variable. I know from the Differential Entropy wiki page that I can write the entropy of $u$ as:

$$ h(U) = h(X) + \int f(x) \log\left|\frac{d \tanh(x)}{dx}\right| dx$$

with $f(x)=\frac{1}{\sigma \sqrt{2\pi}}e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}$ as the probability density function of the Gaussian distribution. I’ve tried to solve the integral term on the right-hand side but haven’t been able to figure it out. I tried (with my limited knowledge of) Wolfram Alpha, without any success. Is there a closed-form expression, and if so, do you know what it looks like?
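
For what it's worth, since $\frac{d\tanh(x)}{dx} = 1 - \tanh^2(x) > 0$, the correction term is just $\mathbb{E}\left[\log(1-\tanh^2(x))\right]$, which can at least be evaluated numerically, e.g. in Mathematica (the values of $\mu$ and $\sigma$ below are arbitrary examples):

mu = 0.3; sigma = 0.8;                                    (* example parameters *)
hX = 1/2 Log[2 Pi E sigma^2];                             (* differential entropy of the Gaussian *)
corr = NExpectation[Log[1 - Tanh[x]^2],
   x \[Distributed] NormalDistribution[mu, sigma]];       (* correction term from the change of variables *)
hU = hX + corr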

Many thanks in advance!