reference request – Algorithm to compute a partition of a graph into N cliques

Does anyone know an efficient algorithm for computing a partition of a graph into N cliques?

Note that N is the number of cliques, not their size.

I've heard about the 2-clique case, but the more general version is what interests me.

Thank you!
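One observation that may help (my own note, not part of the question): a partition of a graph into N cliques is exactly a proper N-coloring of the complement graph. That is why the 2-clique case is easy (check that the complement is bipartite) and why the general problem is NP-hard for N ≥ 3. A minimal brute-force sketch of that reduction, assuming networkx is available:

import networkx as nx
from itertools import product

def partition_into_n_cliques(G, n):
    # A partition of G into n cliques exists iff the complement graph is
    # properly n-colorable; try every color assignment (exponential time,
    # so this is only suitable for small graphs).
    H = nx.complement(G)
    nodes = list(H.nodes)
    for colors in product(range(n), repeat=len(nodes)):
        coloring = dict(zip(nodes, colors))
        if all(coloring[u] != coloring[v] for u, v in H.edges):
            groups = [[v for v in nodes if coloring[v] == c] for c in range(n)]
            return [g for g in groups if g]  # each group is a clique in G
    return None  # no partition into n cliques exists

For anything beyond toy sizes you would replace the brute-force loop with a proper graph-coloring solver or a SAT/ILP encoding of the coloring.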

np hard – If an algorithm can solve NP problems beyond what normal algorithms can, is P = NP in some cases or up to a limit?

Suppose there is an algorithm that can find an optimal solution to a problem like the Traveling Salesman Problem for 25 cities, which means 25! possibilities, in polynomial time by using the power of a supercomputer. So if there is an algorithm that can solve 50 cities, that means 50! possibilities; with supercomputer power it could narrow that huge number of possibilities down to a small number, as could be done for 25 cities.

So what does such an algorithm mean? Is it P = NP up to the limit of 50 cities, or simply that it can get an answer up to that limit but cannot solve higher numbers like 100!, even though it can discard a lot of the 100! possibilities?
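For scale (my own addition, not part of the question): the number of tours grows factorially, so the ability to enumerate 25-city tours says nothing about 50 cities:

import math

# Number of tours a brute-force TSP search would have to examine.
for cities in (25, 50, 100):
    print(cities, math.factorial(cities))

# 25! is about 1.6e25, 50! about 3.0e64, and 100! about 9.3e157: each jump
# is astronomically harder, not just twice as hard, so a machine that can
# enumerate 25-city tours gets nowhere on 50 cities by brute force.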

What is the best way to avoid using GOTO when writing the pseudocode for an algorithm?


How do you recover your ranking after an update of the Google SEO algorithm? If you have new tips, please share.

Python – Simple Kivy App for Bootstrap Algorithm

I have a very simple application built using Kivy to perform the bootstrap algorithm (statistics). We take some samples of the form x1, x2, ..., xn (these are received as a string "x1, ..., xn" and then transformed into a list of samples). From these samples, the bootstrap algorithm approximates the distribution of the sample mean of the population.

The main widget uses the Widget class, and we put a grid layout of 1 row and 2 columns in it (the left cell holds the options, the right one the plot of the distribution). For the options cell we use another 2×2 grid, whose last cell is a 3×1 grid holding the result buttons.

I wonder if there are more efficient and effective ways to write the code for this application.




Code:

import random

import matplotlib.pyplot as plt

from kivy.config import Config
Config.set('graphics', 'resizable', False)
from kivy.garden.matplotlib.backend_kivyagg import FigureCanvasKivyAgg
from kivy.core.window import Window
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.uix.textinput import TextInput
from kivy.uix.button import Button
from kivy.uix.gridlayout import GridLayout


Window.clearcolor = (0, 0.5, 0.9, 0.1)
Window.size = (800, 400)


# Bootstrap method:


class BootstrapMethod(object):
    def __init__(self):
        self.start_button = BootstrapButton(text="Bootstrap")
        self.layout = BootstrapGrid()
        self.options = BootstrapOptions()


class RunButtonLayout(GridLayout):
    def __init__(self):
        super().__init__(rows=3, cols=1)
        self.add_widget(PlotNormed(text="Normed histogram"))
        self.add_widget(PlotActual(text="Actual histogram"))
        self.add_widget(BootstrapButton(text="Run bootstrap"))


class PlotNormed(Button):
    def on_release(self):
        xbars = self.parent.parent.parent.xbars
        if xbars is not None:
            self.parent.parent.parent.ax.cla()
            self.parent.parent.parent.ax.hist(xbars, normed=True)
            self.parent.parent.parent.fig.figure.canvas.draw()


class PlotActual(Button):
    def on_release(self):
        xbars = self.parent.parent.parent.xbars
        if xbars is not None:
            self.parent.parent.parent.ax.cla()
            self.parent.parent.parent.ax.hist(xbars)
            self.parent.parent.parent.fig.figure.canvas.draw()


class BootstrapButton(Button):
    def on_release(self):
        super().on_release()
        sample_string = self.parent.parent.sample_input.text.split(',')
        self.samples = [float(i) for i in sample_string]
        self.bootstrap_algorithm()

    def bootstrap_algorithm(self):
        # Draw nb bootstrap means; each mean averages n points drawn
        # (with replacement) from the original samples.
        xbars = []
        n = int(self.parent.parent.bootstrap_sample.text)
        if n > len(self.samples):
            n = len(self.samples)
        nb = int(self.parent.parent.bootstrap_iteration.text)
        for i in range(nb):
            xbar = sum([random.sample(self.samples, 1)[0] for j in range(n)]) / n
            xbars.append(xbar)
        self.parent.parent.parent.xbars = xbars
        self.parent.parent.parent.ax.cla()
        self.parent.parent.parent.ax.hist(xbars)
        self.parent.parent.parent.fig.figure.canvas.draw()


class BootstrapOptions(GridLayout):
    def __init__(self):
        super().__init__(rows=2, cols=2)
        self.sample_input = TextInput(text="Sample input")
        self.bootstrap_iteration = TextInput(text="Number of bootstrap iterations")
        self.bootstrap_sample = TextInput(text="Number of samples for resampling")
        self.run_layout = RunButtonLayout()

        self.add_widget(self.sample_input)
        self.add_widget(self.bootstrap_iteration)
        self.add_widget(self.bootstrap_sample)
        self.add_widget(self.run_layout)


class BootstrapGrid(GridLayout):
    def __init__(self):
        super().__init__(rows=1, cols=2, padding=5, spacing=5)
        fig, ax = plt.subplots()
        self.options = BootstrapOptions()
        self.add_widget(self.options)
        self.fig = FigureCanvasKivyAgg(fig)
        self.ax = ax
        self.add_widget(self.fig)
        self.xbars = None


### ###


class Front(Widget):
    def __init__(self):
        super().__init__()
        self.start()

    def start(self):
        self.bs = BootstrapMethod()
        self.add_widget(self.bs.layout)
        self.bs.layout.size = (800, 400)


class mobileApp(App):
    def build(self):
        root = Front()
        return root


app = mobileApp()
app.run()

hash algorithm for very similar images

I am looking for an image-hashing algorithm (at the moment I am thinking of a perceptual hash) that preserves details somewhat better than the usual hash algorithms. The images I have to distinguish are quite similar at first glance and the details have to be taken into account, so I do not really want to shrink them. I need to know whether a given image matches an image already in a database, and I want the hash to at least resist cropping and rotation (cropped/rotated images should give the same or a nearby result). An extra bonus would be resistance to changing a few pixels (with a threshold, of course). My fear with the usual perceptual hashes is that they reduce the image so much that all my images look alike to them. Thank you!
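For reference (my own illustration, not from the question), this is what the "usual" perceptual-hash matching looks like with the third-party Pillow and imagehash packages; the 64-bit reduction in phash is exactly the loss of detail worried about above, and plain phash is not rotation-invariant:

from PIL import Image
import imagehash

def is_near_duplicate(path_a, path_b, threshold=8):
    # phash reduces each image to a 64-bit DCT fingerprint; subtracting
    # two hashes gives their Hamming distance (0 = identical fingerprints).
    h_a = imagehash.phash(Image.open(path_a))
    h_b = imagehash.phash(Image.open(path_b))
    return h_a - h_b <= threshold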

optimization – Implementation of a better algorithm for calculating bacterial growth

I am working on a mathematical model that describes the growth of 4 different bacterial populations and cells of the immune system under certain conditions. The model is governed by the equations below.

POPULATION 1:

$\underbrace{\frac{dN_{PS}}{dt}}_{\text{Rate of change of population}} = \underbrace{r N_{PS}}_{\text{Exponential growth}} \cdot \underbrace{\left(1-\frac{N_{PS}+N_{PR}}{K}\right)}_{\text{Growth limitation}} - \underbrace{\theta_{PS} N_{PS}}_{\text{Natural death}} - \underbrace{A_{PS} N_{PS}}_{\text{Biofilm formation}} + \underbrace{D_{BS} N_{BS}}_{\text{Biofilm dispersion}} - \underbrace{\phi_{PS} N_{PS}}_{\text{Mutation rate}} - \underbrace{\eta \delta_{PS} A_{m} N_{PS}}_{\text{Antibiotic inhibition}} - \underbrace{\left\{\Gamma N_{PS} I\right\}}_{\text{Immune system}}$

POPULATION 2:

$\frac{dN_{BS}}{dt} = (r-c_{b}) N_{BS} \cdot \left(1-\frac{N_{BS}+N_{BR}}{K}\right) - \theta_{BS} N_{BS} + A_{PS} N_{PS} - D_{BS} N_{BS} - \phi_{BS} N_{BS} - \eta \delta_{BS} A_{m} N_{BS} - \left\{\Gamma N_{BS} I\right\}$

POPULATION 3:

$\frac{dN_{PR}}{dt} = (r-c_{r}) N_{PR} \cdot \left(1-\frac{N_{PS}+N_{PR}}{K}\right) - \theta_{PR} N_{PR} - A_{PR} N_{PR} + D_{BR} N_{BR} + \phi_{PS} N_{PS} - \eta \delta_{PR} A_{m} N_{PR} - \left\{\Gamma N_{PR} I\right\}$

POPULATION 4:

$\frac{dN_{BR}}{dt} = (r-(c_{b}+c_{r})) N_{BR} \cdot \left(1-\frac{N_{BS}+N_{BR}}{K}\right) - \theta_{BR} N_{BR} + A_{PR} N_{PR} - D_{BR} N_{BR} + \phi_{BS} N_{BS} - \eta \delta_{BR} A_{m} N_{BR} - \left\{\Gamma N_{BR} I\right\}$

NAIVE CELLS (IMMUNE SYSTEM):

$\frac{dV}{dt} = \frac{-\sigma V B}{\pi + B}$

EFFECTOR CELLS (IMMUNE SYSTEM):

$\frac{dE}{dt} = (2\sigma V + \sigma E) \cdot \frac{B}{\pi + B} - hE\left(1-\frac{B}{\pi + B}\right)$

MEMORY CELLS (IMMUNE SYSTEM):

$\frac{dM}{dt} = fEh\left(1-\frac{B}{\pi + B}\right)$

TOTAL BACTERIAL DENSITY:

$\frac{dB}{dt} = N_{PS} + N_{PR} + N_{BS} + N_{BR}$

DENSITY OF THE IMMUNE SYSTEM:

$\frac{dI}{dt} = V + E + M$

ANTIBIOTIC UPTAKE 1:

$\eta = \begin{cases} 1 & t_{1} \leq t \leq t_{1} + t_{2} \\ 0 & t < t_{1} \ \text{or} \ t > t_{1} + t_{2} \end{cases}$

ANTIBIOTIC UPTAKE 2:

$\eta = \begin{cases} 1 & B \geq \varOmega \\ 0 & B < \varOmega \end{cases}$

I am interested in how $N_{PS}, N_{PR}, N_{BS}, N_{BR}, V, E, M$ change over time, for which I implemented an algorithm in Python to solve the equations. Greek letters and other parameters not described here (e.g., $r$, $K$, etc.) are mostly constants defined at the beginning of the program's execution.

An example of the functions used is shown below. As you can see in the code, I currently use Euler's method to solve the equations. However, I would like to implement at least Heun's method, or even a higher-order Runge-Kutta method.

I'm stuck with Heun's method and do not know how to implement it. I am asking for help on how to modify the following code, replacing Euler with Heun for example, if that is possible with these equations.

def sensitive_bacteria_PS(previous_density, growth_rate, density_PR, maximum_density_P,
                          death_rate_PS, attachment_rate_PS, mutation_rate_PS, antibiose_pol, ...):
    # One Euler step for N_PS; the rest of the signature and of the return
    # expression is truncated in the question.
    return previous_density + ((previous_density * (1 - (previous_density + density_PR) / maximum_density_P) ...
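Since the Euler snippet above is cut off, here is a minimal, self-contained sketch of Heun's method for a system like this one, assuming the whole state $(N_{PS}, N_{BS}, N_{PR}, N_{BR}, V, E, M, \dots)$ is packed into one NumPy vector and a function f(t, y) returns all the right-hand sides at once (heun_step and f are my names, not from the question):

import numpy as np

def heun_step(f, t, y, dt):
    # Heun's method (explicit trapezoidal rule): take an Euler step as a
    # predictor, then average the slopes at both ends of the interval.
    k1 = f(t, y)                     # slope at the current point
    y_pred = y + dt * k1             # Euler predictor
    k2 = f(t + dt, y_pred)           # slope at the predicted point
    return y + dt * (k1 + k2) / 2.0  # trapezoidal corrector

# Tiny demo on y' = -y (exact solution exp(-t)), just to show the stepping:
y = np.array([1.0])
for step in range(100):
    y = heun_step(lambda t, y: -y, step * 0.01, y, 0.01)
print(y)  # close to exp(-1) = 0.3679

Because every equation only needs the current state, the same f works for Euler, Heun, or a higher-order Runge-Kutta; only the stepping rule changes.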

algorithms – Creating a waste-optimization algorithm for cutting a 1D block

I have a block of one-dimensional material. I run an analysis that divides the material into usable and unusable regions.

In a manufacturing process, the material is cut and the unusable regions are discarded. My two constraints are:

  • The plant treats any usable region that is too small as unusable, because it cannot be processed.

  • The factory's cutting tool cannot cut below a minimum width, so defective regions narrower than this width must be widened into a good region until they reach the minimum width. This is the waste I would like to minimize.

The number of regions will not exceed 50.

I would like to create an algorithm that widens these bad regions and shrinks the good ones while creating as little waste as possible.

For example: a too-small bad region lying between a good region and a small good region could be expanded as far as possible into the small good one, since that region will become waste anyway.

My first guess is that this is a cutting-stock / packing kind of problem, but beyond an introductory level I have no experience in this area.
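A rough greedy sketch of the preprocessing described above (entirely my own, with my own names: regions are (length, usable) pairs laid end to end, and min_good and min_cut stand for the two plant limits). It handles the "it will become waste anyway" case by absorbing a donor region whose remainder drops below the processable size:

def donation_cost(donor_len, take, min_good):
    # Usable material destroyed by taking 'take' from a donor: the piece
    # taken, plus the remainder if it drops below the processable size.
    rest = donor_len - take
    return take + (rest if rest < min_good else 0)

def preprocess(regions, min_good, min_cut):
    # 1. Usable regions too short to process count as unusable (waste).
    regs = [[length, usable and length >= min_good] for length, usable in regions]

    # 2. Merge adjacent regions of the same type so each bad region is maximal.
    merged = []
    for length, usable in regs:
        if merged and merged[-1][1] == usable:
            merged[-1][0] += length
        else:
            merged.append([length, usable])

    # 3. Widen each bad region narrower than the minimum cut width by taking
    #    material from the neighbour that destroys the least usable material.
    for i in range(len(merged)):
        length, usable = merged[i]
        if usable or length >= min_cut:
            continue
        need = min_cut - length
        donors = [j for j in (i - 1, i + 1) if 0 <= j < len(merged) and merged[j][1]]
        donors.sort(key=lambda j: donation_cost(merged[j][0], min(need, merged[j][0]), min_good))
        for j in donors:
            take = min(need, merged[j][0])
            merged[j][0] -= take
            merged[i][0] += take
            need -= take
            if merged[j][0] < min_good:
                # Remainder is no longer processable: absorb it too, the
                # "it will become waste anyway" case from the example.
                merged[i][0] += merged[j][0]
                merged[j][0] = 0
            if need <= 0:
                break
    return [r for r in merged if r[0] > 0]

This is only a local rule, and a donor that shrinks can in turn create new waste; with at most 50 regions, an exhaustive or dynamic-programming search over the donor choices is entirely feasible if the greedy result is not good enough.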

Mathematica Algorithm for FindIndependentEdgeSet[]?

The documentation does not mention any algorithm.
My guess is the Blossom algorithm, because I do not know of a better one.

I'm posting this in the hope that someone knows for sure.

Determine when one algorithm will be slower than another

I am studying for a computer science exam, came across the following question on an old paper, and need help with it.

When will algorithm A be slower than algorithm B? Show your answer with the help of an example. In addition, what will the value of SUM be at the end of each algorithm if size is set to 10,000?

Algorithm A
SET sum to 0
FOR i = 1 to size
    FOR j = 1 to 10,000
        sum = sum + 1

Algorithm B
SET sum to 0
FOR i = 1 to size
    FOR j = 1 to size
        sum = sum + 1

I came up with this answer, but I am not sure it is correct:

Algorithm A will be slower than algorithm B when its running time is proportional to the cube, or a higher power, of the size of the input data set, for example if the Big O notation becomes O(N³), O(N⁴), O(N⁵), and so on. To get O(N³) behaviour, nest the two for loops inside two more for loops:

SET sum to 0
FOR i = 1 to size
    FOR k = 1 to size
        FOR l = 1 to size
            FOR j = 1 to 10,000
                sum = sum + 1
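A quick sanity check of the two original algorithms (my own addition, assuming running time is proportional to the number of additions): algorithm A performs 10,000 · size additions and algorithm B performs size · size, so A is slower exactly when size < 10,000, and with size = 10,000 both end with SUM = 100,000,000.

def sum_A(size):
    # Algorithm A: 'size' outer iterations, a fixed 10,000 inner iterations.
    return size * 10_000

def sum_B(size):
    # Algorithm B: 'size' outer iterations, 'size' inner iterations.
    return size * size

print(sum_A(10_000), sum_B(10_000))  # 100000000 100000000 -- a tie here
print(sum_A(100) > sum_B(100))       # True: A does more work below size 10,000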