c++ – Design of Experiments data structure with C++17

Because you’re defining your member functions within the class declaration, you don’t need to use the inline keyword. All member functions defined within the class declaration are implicitly inline (including the default and copy constructors).

In your copy constructor, you can make use of member initializers to construct the copied maps directly, rather than default construct and assign:

doe_model(const doe_model& doe)
    : required_explorations(doe.required_explorations),
      next(required_explorations.end()),
      number_of_explorations(doe.number_of_explorations) {}

There are a few places where you’re using two statements but they can be combined.

In update_config, you can decrement in the test:

if (--number_of_explorations.at(config_id) <= 0)

In get_next, you can combine the iterator increment and return statement:

return ++next;

This also uses the preincrement on the iterator, as it avoids creating a copy of the original iterator that is returned by the postincrement version, then immediately discarded.

In add_config, if the config already exists, you can end up with a higher number of required explorations than are provided: when required_explorations is assigned to, no change is made to number_of_explorations. This may or may not be a problem.

You should consider adding a move constructor and a move assignment operator, which can avoid creating copies.
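As a sketch of what those move operations could look like (only the member names come from the code under review; the map key/value types here are assumptions), the idea is to steal both maps and then re-anchor the `next` iterator, since an iterator into the moved-from map should not be reused:

```cpp
#include <map>
#include <string>
#include <utility>

// Hypothetical doe_model skeleton: member names follow the original,
// but the key/value types are assumptions made for this sketch.
struct doe_model {
    std::map<std::string, int> required_explorations;
    std::map<std::string, int>::iterator next;
    std::map<std::string, int> number_of_explorations;

    doe_model() : next(required_explorations.end()) {}

    // Move constructor: steals both maps instead of copying them,
    // then re-anchors `next` in the newly owned map.
    doe_model(doe_model&& other) noexcept
        : required_explorations(std::move(other.required_explorations)),
          next(required_explorations.end()),
          number_of_explorations(std::move(other.number_of_explorations)) {}

    // Move assignment: same idea, for an already-constructed object.
    doe_model& operator=(doe_model&& other) noexcept {
        required_explorations = std::move(other.required_explorations);
        number_of_explorations = std::move(other.number_of_explorations);
        next = required_explorations.end();
        return *this;
    }
};
```

With these defined, returning a doe_model from a factory function or moving one into a container no longer forces a deep copy of both maps.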

fourier analysis – How to analyse raw data to obtain frequency and phase from FFT?

I have some data from an experiment which has multiple “nodes” (dominant sine wave components) and some noise. I want to use FFT to find the amplitude and phase of these dominant components.

Currently, by editing the output of the Fourier function, I have managed to make a function that gets the dominant frequencies.

I was wondering if there is a way to find the phase as well (and to sort the output so I only see the phase for the dominant components)?

This is the code for the FFT function:

FFT[list_, tstep_, Ωmax_, n_] :=
  Module[{FTlist = Abs[Fourier[list]],
    totaltime = Length[list] tstep, ΔΩ,
    completeFFTlist, plot, peaks, Ωpeak},
   ΔΩ = N[(2 π)/totaltime];
   completeFFTlist =
    Reap[
      For[i = 1,
       (2 π (i - 1))/totaltime <= Ωmax && i <= Length[FTlist], i++,
       Sow[{N[(2 π (i - 1))/totaltime],
         FTlist[[i]]/Max[FTlist]}]]][[2, 1]];
   plot = (* the plotting head was garbled in the original; ListLinePlot assumed *)
    ListLinePlot[completeFFTlist,
     PlotRange -> {{0, Ωmax}, All}, Frame -> True,
     PlotStyle -> {Blue, Thickness[0.005]},
     FrameLabel -> {"Angular Frequency (rad/s)",
       "Power (Arbitrary units)"}, LabelStyle -> Directive[50],
     PlotTheme -> "Classic"];
   peaks =
    Sort[FindPeaks[completeFFTlist[[All, 2]]], #1[[2]] > #2[[2]] &];
   peaks = If[peaks[[1, 1]] == 1, Delete[peaks, 1], peaks];
   Ωpeak =
    Table[completeFFTlist[[peaks[[i, 1]], 1]], {i, 1, n}];
   Return[{Ωpeak, ΔΩ, completeFFTlist, plot}]];

This just makes a nice plot and tells me the n frequencies with the highest intensity/amplitude.

rds – What is the best practice when a MySQL slave misses data?

We have a huge database, over a TB, running on AWS RDS. We have a MySQL slave set up on premises, basically for compliance and statutory needs. We have a big audit coming up in weeks, and I notice that while data seems to be syncing perfectly, there are a number of rows missing across many tables. No pattern is emerging.

In such cases, what is the recommended practice for troubleshooting? Setting everything up again is not an option.

patterns and practices – Separation of data retrieval and processing in loops?

Often I need to get some data and process it in some way, for example getting a list of customers from an API and assembling some summary data on them:

api_result = api.request('customers/get', params)

suggestion_frequencies = {}

for item in api_result:
    customer = Customer.fromDict(item)
    suggestions = get_suggestions(customer)
    update_suggestion_frequency(suggestion_frequencies, suggestions)


Suggestion        Number of Occurrences
Don't like color: 473
Too Expensive:    507
Too Hard:         98

This works but the data processing is coupled with the data retrieval. Ideally I’d like to be able to separate the retrieval of the customers from the processing so the code is less coupled. I’d like to instead be able to have something like:

customers = get_customers()

suggestion_frequencies = {}
for customer in customers:
    suggestions = get_suggestions(customer)
    update_suggestion_frequency(suggestion_frequencies, suggestions)

But this forces me to implement the loop twice: one loop to hydrate the customer objects from the API result, and another to process the data.

Is there a design pattern or way to separate these two steps yet still only loop over the data once?
(Or is it just accepted that these will usually be two separate steps with two loops? I am using python, but I’m interested in the general case)
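One way to get both: make the hydration step a generator. Retrieval and processing then live in separate functions, but because generators are lazy, the data is still traversed only once. A sketch, with stand-ins for the question's `Customer.fromDict`, `get_suggestions`, and `update_suggestion_frequency` (their real shapes are assumptions here):

```python
# Stand-ins for the question's helpers (assumed shapes, for this sketch only).
class Customer:
    def __init__(self, name, suggestion):
        self.name, self.suggestion = name, suggestion

    @classmethod
    def fromDict(cls, d):
        return cls(d["name"], d["suggestion"])

def get_suggestions(customer):
    return [customer.suggestion]

def update_suggestion_frequency(freq, suggestions):
    for s in suggestions:
        freq[s] = freq.get(s, 0) + 1

# Retrieval/hydration stage: its own function, but because it is a
# generator the items are produced on demand -- one pass over the data.
def get_customers(api_result):
    for item in api_result:
        yield Customer.fromDict(item)

# Processing stage: consumes the generator without knowing about the API.
def summarize(api_result):
    suggestion_frequencies = {}
    for customer in get_customers(api_result):
        update_suggestion_frequency(suggestion_frequencies,
                                    get_suggestions(customer))
    return suggestion_frequencies
```

The same idea extends to the API call itself: get_customers could issue the request internally and yield hydrated objects as results arrive, so callers never see the raw API payload at all.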

functions – Convert list of InterpolatingFunctions to data


data = RandomReal[10, {20, 2}] // Sort;

g = Interpolation[data]


The source data can be extracted from the InterpolatingFunction

data === Transpose[{g[[3, 1]], g[[4, 3]]}]

(* True *)

To uniformly resample the InterpolatingFunction on the domain

dom = g[[1, 1]]

(* {0.0472677, 9.7063} *)

data2 = {#, g[#]} & /@ (Subdivide[##, 9] & @@ dom);

Show[
 Plot[g[x], {x, dom[[1]], dom[[2]]},
  PlotRange -> All],
 ListPlot[{data, data2},
  PlotStyle -> {Blue, Red}]]


c# – Data Repository and Complex Queries (DTO)

Following the classic 3 layer architecture

  • Domain model (a list of entities lives there; it has no dependencies)

  • DAL layer – my repositories live there, with a DbContext implementation (ADO.NET); the DAL returns pure entities (references the domain model)

  • Service layer (business layer) – exposes the repository methods (references the DAL layer and the domain model)

  • UI layer (WPF) > Service layer (BLL) > Repositories

My question is related to the following concepts.

Article after article states that data repositories should return only entities.


public interface ICustomerRepository
{
    IEnumerable<Customer> FindByText(string TextToFind);
    Customer GetByID(string ID);
    IEnumerable<Customer> GetAll();
    void Update(Customer CustomerToAdd);
    void Delete(params string[] Ids);
}

Then I ask myself:

  • What if I need to show a list of customers with some other complex inner join or union query?
  • And a lot of other queries related to other view requirements.

(Actually, in our accounting application, 90% of the screens never show data in its pure form (entities); it is always combined with other inner joins, summed amounts, etc.
E.g. customer: Name, AccountName (inner join from accounts), Balance (inner join from ledger), etc.)

Some state that it is OK to put such a query in the repository; others say it is not good, because the result is not an entity.

Where does this logic go, since repositories don't allow me to return anything other than the entity model? And the CustomerService in the BLL layer doesn't allow me to return DTOs from its methods either.

I read that CQRS comes on stage to solve UI queries.

OK, let's say that I follow CQRS for the query side (let's skip commands, as commands and updates go through the repository).

Now I end up having:

  • CustomerRepository (DAL layer)
  • CustomerService (BLL – service layer), which just exposes the repository methods and maybe some other related things
  • CustomerQueries class (BLL – service layer), which contains any complex query (DTO) related to customers, has a direct SQL connection, and converts a DataTable to the relative DTO to hand to the UI layer

My question: is this a proper way to follow?

In which layer does the CQRS query side live? In the business layer (which some call the service layer), or in the DAL layer where my repositories live?

Many times I find it much easier just to type CustomerService. and have IntelliSense give me all the related information I might want about a customer (some functions return DTOs, others return amounts, and other times just a bool that checks some rules which require complex SQL queries).

The problem is that, since CustomerService (as they say) should only be responsible for calling the relative CustomerRepository to fetch or update things related to that repository, where do I put the logic for complex queries, and in which layer?

javascript – How to parse an HTML template from JSON and populate the data

I have a requirement to show an HTML template served from a JSON API, with custom tags on it. Those custom tags should be populated with another JSON object. Please help.

I’m able to populate the HTML in the page, but not able to bind the custom variable with a JSON value in it.


import React, { useEffect, useState } from 'react';
import axios from 'axios';
import ReactHtmlParser from 'react-html-parser';

const DynamicContainer = () => {
  const [result, setResult] = useState('');

  useEffect(() => {
    axios.get(`http://localhost:3000/data`)
      .then(res => setResult(res.data))
      .catch(error => console.error(error));
  }, []);

  if (result !== '') {
    const title = result.jsonContent.title; // the value I want bound into the template
    const html = result.jsonHtml;
    return <div>{ReactHtmlParser(html)}</div>;
    // alternatively:
    // <div className="" dangerouslySetInnerHTML={{ __html: result.jsonHtml }}></div>
  }
  return null;
};

export default DynamicContainer;

// JSON format (somewhat similar to this: an HTML template and a JSON value.
// This is on my local machine; in production there would be 2 APIs serving the HTML and the JSON.)

{
  "jsonHtml": "<div class=\"card\"><h1>${title}</h1></div>",
  "jsonContent": {
    "title": "Hello World"
  }
}

Should postman tests test real or mocked data?

I want to write API tests using Postman and then run them on Jenkins. My question is: should those tests target real application data, or should I set up some kind of mocked data just for those tests?

python – YouTube Data API how to extract more than 100 comments?

I am using YouTube Data API v3 to extract comments but the maximum I can get is 100 comments.

This is the code

url = "https://www.googleapis.com/youtube/v3/commentThreads?part=snippet&maxResults=1000&order=relevance&videoId=cosc1UEbMbY&key=MYKEY"

data = requests.get(url).json()

comments = []
co = 0

for i in range(len(data['items'])):
    # loop body truncated in the question; presumably collecting each comment:
    comments.append(data['items'][i]['snippet']['topLevelComment']['snippet']['textDisplay'])
    co += 1

Even if I set the maxResults parameter to 1000, the API only returns 100 comments.
How can I fix this problem?

Thank you
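The commentThreads endpoint caps maxResults at 100 per page; to get more, you have to follow nextPageToken across requests. A sketch (the video id and key are placeholders; the `get` parameter exists only so the paging logic can be exercised without network access):

```python
API_URL = "https://www.googleapis.com/youtube/v3/commentThreads"

def fetch_all_comments(video_id, api_key, get=None):
    """Collect top-level comment texts across pages via nextPageToken."""
    if get is None:
        import requests  # real HTTP only when no fake getter is injected
        get = requests.get
    comments = []
    params = {
        "part": "snippet",
        "maxResults": 100,  # the API's per-page maximum
        "order": "relevance",
        "videoId": video_id,
        "key": api_key,
    }
    while True:
        data = get(API_URL, params=params).json()
        for item in data.get("items", []):
            comments.append(
                item["snippet"]["topLevelComment"]["snippet"]["textDisplay"])
        token = data.get("nextPageToken")
        if not token:  # the last page carries no nextPageToken
            return comments
        params["pageToken"] = token
```

Each response's nextPageToken is fed back as the pageToken parameter of the next request; the loop stops when the API omits it.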

How to separate 2 different types of search events in my Google Analytics data?

I’d like to propose Option #3: GA’s dedicated search tracking with Search Category set. This will give you the four reports under Behavior > Site Search and allow you to analyze search behavior as a whole or divided by type.

All of the setup for this is in Google Analytics, not your JavaScript; it is based on the query string being present.

Note: this setup uses an Advanced Filter, which can’t be verified via preview, so apply it to a testing View before making the changes to your main View.

First, in your View settings, enable Site Search Tracking and set q as the query parameter.

Site Search Settings area of GA View settings page

In the screenshot I haven’t checked “strip query parameters out of URL” but I personally would select that. Also, this step should be fine to do in your main View right off.

For categories, you’ll need a View Filter set up as follows:

Filter using portion of URL for search category value

You’ll create a new Custom filter of type Advanced, find Request URIs that match your two search URLs, capture the category identifier with (blog|products), and output it to the field Site Search Category.

Once you have the filter defined and it is working the way you need it to, you can apply it to your main View with the Apply Existing Filter option also shown in the screenshot.