python – Count the elements of a numpy array in an interval

I have a question and I really don't know how to do it: I have to count how many times values fall in each interval. The array is as follows:

datos = np.array([0.525, 0.546, 0.522, 0.577, 0.563, 0.572, 0.558, 0.56, 0.508, 0.562, 0.564, 0.537, 0.574, 0.586])

and the intervals with the result to be output are:

INTERVAL    RESULT
0.5-0.51    1
0.51-0.52   0
0.52-0.53   2
0.53-0.54   1
0.54-0.55   1
0.55-0.56   1
0.56-0.57   4
0.57-0.58   3
0.58-0.59   1

Does anyone know how to do it? I thought of splitting datos into sub-arrays using np.split, but from there I don't know what to do anymore. I would really appreciate your help, as I'm stuck.
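One possible approach, assuming the bin edges shown in the table: np.histogram counts the elements per interval directly, so no splitting is needed (the last bin is closed on the right, which matches the table above).

```python
import numpy as np

datos = np.array([0.525, 0.546, 0.522, 0.577, 0.563, 0.572, 0.558,
                  0.56, 0.508, 0.562, 0.564, 0.537, 0.574, 0.586])

# Bin edges 0.50, 0.51, ..., 0.59 (10 edges -> 9 intervals)
bins = np.linspace(0.50, 0.59, 10)
counts, edges = np.histogram(datos, bins=bins)
# counts -> [1 0 2 1 1 1 4 3 1]

for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:.2f}-{hi:.2f}  {c}")
```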

python – Why doesn't the vectorized numpy function apply to each element?

I am trying to apply a vectorized function to a 1D NumPy array (test).

If an element is higher than a certain threshold, the function is applied, otherwise a 0 is returned.

Passing the function over the entire array gives an array of zeros (result #1).

However, it works for a small array (result #2) and for certain sections of the large array (result #3).

Can you help me understand why this is?

import numpy as np
test = np.array([-58.08281  , -47.07844  , -39.38589  , -38.244213 , -36.04118  ,
       -35.17719  , -47.651756 , -47.123497 , -38.47037  , -31.427711 ,
       -35.980206 , -39.04678  , -43.247276 , -29.217781 , -26.16616  ,
       -23.175611 , -19.073223 , -19.573145 , -19.291908 , -19.084608 ,
       -24.24286  , -26.768343 , -29.40547  , -42.254036 , -32.5126   ,
       -27.8232   , -26.521381 , -18.53816  , -16.300032 , -14.897881 ,
       -11.96727  , -11.895884 , -11.958228 , -11.689035 , -19.331993 ,
       -22.528988 , -14.850136 , -10.7898   , -10.738896 , -11.510415 ,
       -11.297523 , -14.9558525, -18.261246 , -20.11386  , -35.434853 ,
       -36.547577 , -29.713285 , -35.055378 , -19.717499 , -15.524372 ,
       -14.905738 , -11.690297 , -12.295127 , -14.571337 , -14.457521 ,
       -20.896961 , -35.145    , -39.106945 , -20.592056 , -19.292147 ,
       -21.957949 , -20.131369 , -31.953508 , -24.577961 , -23.88112  ,
       -16.549093 , -16.742077 , -22.181223 , -21.692726 , -34.572075 ,
       -20.111103 , -18.57012  , -12.833547 , -11.325545 , -12.807129 ,
       -11.844269 , -19.830124 , -21.79983  , -18.484238 , -12.855567 ,
       -11.830711 , -14.83697  , -14.618052 , -19.990686 , -30.934792 ,
       -27.72318  , -17.222315 , -14.099125 , -16.516563 , -15.129327 ,
       -19.21385  , -41.145554 , -37.12835  , -20.674335 , -17.670841 ,
       -26.641182 , -26.721628 , -29.708376 , -16.29707  , -15.220005 ,
       -11.475418 ,  35.859955 , -10.404102 ,  35.160667 , -11.339685 ,
       -17.627815 , -18.65314  , -25.346134 , -38.297813 , -22.460407 ,
       -21.334377 , -16.922516 , -10.733174 ,  35.263527 ,  35.078003 ,
        35.26928  ,  35.44266  ,  35.89205  , -10.965962 , -16.772722 ,
       -10.638295 ,  35.37294  ,  35.32364  ,  35.271263 ,  35.900078 ,
        35.145794 , -12.282233 , -14.206524 , -18.138363 , -37.339016 ,
       -26.27323  , -27.531588 , -25.00942  , -13.963585 , -12.315678 ,
       -10.978365 ,  35.439877 , -10.534686 , -11.77856  , -12.630129 ,
       -22.29188  , -32.74709  , -29.052572 , -16.526686 , -18.223225 ,
       -19.174236 , -18.920668 , -34.266537 , -23.23388  , -19.992903 ,
       -13.9729805, -16.85691  , -20.88271  , -21.805904 , -24.517344 ,
       -17.412155 , -15.050234 ,  35.047886 , -10.27907  , -10.765995 ,
       -11.394721 , -34.574    , -18.185272 , -15.156159 , -10.370025 ,
       -11.406872 , -13.781429 , -13.863158 , -24.35263  , -29.509377 ,
       -24.758411 , -14.150916 , -13.686075 , -15.366934 , -14.149103 ,
       -22.916718 , -35.810047 , -33.369896 , -17.931974 , -18.65556  ,
       -28.330248 , -27.015589 , -23.890095 , -15.020579 , -13.920487 ,
        35.49385  ,  35.613037 ,  35.326546 ,  35.1469   , -12.024554 ,
       -17.770742 , -18.414755 , -31.574192 , -35.00205  , -20.591629 ,
       -21.097118 , -14.166552 ,  35.61772  ,  35.196175 ,  35.884003 ,
        35.032402 ,  35.289963 ,  35.18595  , -36.364285 , -10.158181 ,
        35.040634 ,  35.349873 ,  35.31796  ,  35.87602  ,  35.88828  ,
        35.086105 , -12.404961 , -13.550255 , -20.19417  , -35.630135 ,
       -23.762396 , -27.673418 , -19.928736 , -12.206515 , -11.781338 ,
        35.307823 ,  35.67385  , -10.780588 , -11.199528 , -13.561855 ,
       -24.982666 , -30.838753 , -25.138466 , -16.61114  , -20.002995 ,
       -18.823566 , -21.581133 , -25.644733 , -22.914455 , -17.489904 ,
       -13.714966 , -18.483316 , -20.454823 , -25.238888 , -20.592503 ,
       -17.511456 , -13.5111885,  35.399975 , -10.711888 , -10.577221 ,
       -13.2071705, -27.878649 , -16.227467 , -13.394671 ,  35.33075  ,
       -10.933496 , -12.903596 , -13.261947 , -23.191689 , -36.082005 ,
       -26.252464 , -14.935854 , -14.955426 , -16.291502 , -15.563564 ,
       -27.91648  , -30.43707  , -27.09887  , -16.93166  , -19.03229  ,
       -26.68034  , -26.50705  , -22.435007 , -15.312309 , -13.67744  ,
        35.70387  ,  35.197517 ,  35.21866  ,  35.759956 , -12.934032 ,
       -18.348143 , -19.073929 , -36.864773 , -32.881073 , -20.560263 ,
       -20.530846 , -13.128365 ,  35.65545  ,  35.465275 ,  35.028538 ,
        35.842434 ,  35.676643 , -17.01441  , -17.217728 ,  35.667717 ,
        35.871662 ,  35.92965  ,  35.316013 ,  35.096027 ,  35.02661  ,
        35.988937 , -12.0597515, -13.201061 , -20.259245 , -28.855875 ,
       -21.791933 , -25.400242 , -17.618946 , -11.611944 , -11.329423 ,
        35.063614 ,  35.825493 , -10.553531 , -10.820301 , -13.883024 ,
       -22.231556 , -26.921532 , -31.872276 , -18.039211 , -19.713062 ,
       -20.517511 , -21.620483 , -26.919012 , -20.787134 , -17.330051 ,
       -13.198881 , -15.984946 , -19.181019 , -21.50328  , -25.311642 ,
       -18.11811  , -14.696231 , -10.136784 , -10.480961 , -11.110486 ,
       -13.739718 , -28.865023 , -15.966995 , -13.198223 ,  35.18759  ,
       -10.803377 , -12.718526 , -13.597855 , -23.472122 , -34.405643 ,
       -24.122065 , -14.643904 , -14.425363 , -15.651573 , -15.197855 ,
       -25.13602  , -33.207695 , -26.908777 , -17.217882 , -19.061764 ,
       -27.06517  , -28.88142  , -21.721449 , -14.84623  , -12.997027 ,
        35.853565 ,  35.51484  ,  35.660423 ,  35.982292 , -12.461762 ,
       -17.52755  , -19.008127 , -32.69878  , -30.82928  , -20.193447 ,
       -19.172876 , -12.901536 ,  35.05082  ,  35.915546 ,  35.254303 ,
        35.797028 , -14.470562 , -22.461277 , -15.07134  ,  35.970448 ,
        35.198704 ,  35.945583 ,  35.362762 ,  35.306732 ,  35.064957 ,
        35.10975  , -11.703257 , -13.411005 , -20.08778  , -28.905445 ,
       -22.59493  , -25.155657 , -17.814808 , -11.842859 , -11.154184 ,
        35.989094 ,  35.854362 , -10.2389765, -10.827884 , -14.010275 ,
       -25.168896 , -33.99552  , -22.858255 , -16.562387 , -19.22073  ,
       -18.317003 , -23.036928 , -25.22068  , -21.934307 , -16.469448 ,
       -13.88927  , -18.307293 , -20.485218 , -29.06332  , -19.628113 ,
       -16.496414 , -12.351503 ,  35.66623  , -10.330103 , -10.866837 ,
       -16.813847 , -21.454565 , -15.892494 , -12.269305 ,  35.174488 ,
       -11.898882 , -13.1494465, -15.517578 , -35.11971  , -29.069548 ,
       -19.153015 , -13.194953 , -14.334308 , -14.483275 , -15.592762 ,
       -30.123589 , -38.262245 , -24.752253 , -17.36696  , -22.627728 ,
       -29.787828 , -44.489254 , -17.438164 , -13.678364 , -11.26264  ,
        35.92086  ,  35.600876 ,  35.231567 ,  35.960655 , -13.438512 ,
       -16.794493 , -19.414097 , -33.008324 , -23.844492 , -18.63253  ,
       -17.060545 , -10.566847 ,  35.735447 ,  35.061024 ,  35.95225  ,
       -11.117262 , -18.978222 , -39.73106  , -11.048341 ,  35.58616  ,
        35.699783 ,  35.32885  ,  35.09172  ,  35.119743 ,  35.753242 ,
        35.73512  , -12.641587 , -14.861554 , -25.59355  , -29.808552 ,
       -24.463276 , -26.617489 , -15.665792 , -11.706967 , -11.054789 ,
        35.413254 ,  35.13033  , -10.968152 , -11.514641 , -17.074472 ,
       -31.623056 , -40.51703  , -18.116985 , -15.995826 , -18.33452  ,
       -17.266975 , -28.274193 , -24.104795 , -21.711021 , -15.209898 ,
       -15.003292 , -20.39471  , -21.562126 , -34.197975 , -16.957975 ,
       -14.923981 , -10.418877 ,  35.874657 , -10.214642 , -11.880876 ])

test2 = np.array([5, 5, 5])

Thresh = -10
Ratio = 5

# Defining the function
def gr(x):
    if x >= Thresh:
        return Thresh + (x - Thresh) / Ratio
    else:
        return 0

# Vectorising the function
gr_v = np.vectorize(gr)

#RESULTS
##1
print(sum(gr_v(test))) # 0
##2
print(sum(gr_v(test2)))
##3
print(sum(gr_v(test[200:250])))
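The likely culprit: np.vectorize infers its output dtype from the first element it evaluates. The first element of test is below Thresh, so gr returns the integer 0, the whole output becomes an integer array, and every above-threshold result (all of which lie between -1 and 0 here) is truncated to 0. The small array and the slice starting at index 200 both begin with an above-threshold element, so there the dtype is inferred as float. A sketch of two fixes, on a small illustrative sample:

```python
import numpy as np

Thresh = -10
Ratio = 5

def gr(x):
    if x >= Thresh:
        return Thresh + (x - Thresh) / Ratio
    else:
        return 0.0          # return a float so the dtype can never be inferred as int

# ...or force the output dtype explicitly, regardless of what gr returns:
gr_v = np.vectorize(gr, otypes=[float])

test = np.array([-58.1, 35.86, -10.0, 35.99])   # small illustrative sample

# An equivalent, fully vectorized form that avoids np.vectorize entirely:
alt = np.where(test >= Thresh, Thresh + (test - Thresh) / Ratio, 0.0)

print(gr_v(test))   # the 35.x entries now keep their fractional values
```

np.vectorize is only a convenience loop; the np.where form is both correct and much faster on large arrays.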

numpy – Binning while avoiding the need for for loops in Python?

I am working on a relatively simple binning program, where I take a 5D array and bin it based on two 3D arrays to create a contour plot. See the sample code below. In reality, my arrays are large (27,150,20,144,288), so executing a 4-deep nested for loop as shown below takes a long time. Is there a way to speed up this loop and ideally avoid the need for all of these loops? My apologies in advance if it is not clear – I am new to this!

import numpy as np
import matplotlib.pyplot as plt

S_mean = np.random.rand(5,10,10,10)
T_mean = np.random.rand(5,10,10,10)
Volume_mean = np.random.rand(2,5,10,10,10)


T_bins = np.linspace(0,1,36)
S_bins = np.linspace(0,1,100)


int_temp = []
int_sal = []

for i in range(5):
    int_temp.append(np.digitize(T_mean[i,:,:,:].flatten(), T_bins))
    int_sal.append(np.digitize(S_mean[i,:,:,:].flatten(), S_bins))

volume_sum = np.zeros((2,5,S_bins.size,T_bins.size))

# This is the problem loop

for k in range(2):
    for l in range(5):
        for i in range(T_bins.size):
            for j in range(S_bins.size):
                volume_sum[k,l,j,i] = np.nansum(Volume_mean[k,l,:,:,:].flatten()
                                      [np.argwhere(np.logical_and(int_temp[l] == i, int_sal[l] == j))])

# The output I am trying to get

plt.pcolormesh(T_bins, S_bins, volume_sum[0,0,:,:])
plt.show()
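One way to eliminate the two inner loops (the expensive part): digitize once per slice, then scatter-add every weight into its (s_bin, t_bin) cell with np.add.at, which accumulates duplicate indices correctly. A sketch with the same toy shapes as above (np.nan_to_num stands in for the nansum behaviour):

```python
import numpy as np

S_mean = np.random.rand(5, 10, 10, 10)
T_mean = np.random.rand(5, 10, 10, 10)
Volume_mean = np.random.rand(2, 5, 10, 10, 10)

T_bins = np.linspace(0, 1, 36)
S_bins = np.linspace(0, 1, 100)

volume_sum = np.zeros((2, 5, S_bins.size, T_bins.size))

for k in range(2):
    for l in range(5):
        ti = np.digitize(T_mean[l].ravel(), T_bins)   # bin index per sample
        si = np.digitize(S_mean[l].ravel(), S_bins)
        w = np.nan_to_num(Volume_mean[k, l].ravel())  # mimics nansum for NaN weights
        # accumulate every weight into its (s_bin, t_bin) cell in one call
        np.add.at(volume_sum[k, l], (si, ti), w)
```

This replaces T_bins.size * S_bins.size passes over the data with a single pass per (k, l) slice, which should also scale to the real array shapes.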

numpy – Find pixel indices in a shape: Opencv and Python

Suppose I have a hollow, curved (and not necessarily convex) mask that comes out of my preprocessing steps:

Hollow circle mask

I now want to select all the pixels that fall inside this shape and add them to the mask, as follows:

Filled circle mask

How can I do this in Python?


Code to generate the examples:

import cv2
import numpy as np
import matplotlib.pyplot as plt

# Parameters for creating the circle
COLOR_BLUE = (255, 0, 0)
IMAGE_SHAPE = (256, 256, 3)
CIRCLE_CENTER = tuple(np.array(IMAGE_SHAPE) // 2)[:-1]
CIRCLE_RADIUS = 30
LINE_THICKNESS = 5 # Change to -1 for example of filled circle

# Draw on a circle
img = np.zeros(IMAGE_SHAPE, dtype=np.uint8)
img_circle = cv2.circle(img, CIRCLE_CENTER, CIRCLE_RADIUS, COLOR_BLUE, LINE_THICKNESS)
circle_mask = img_circle[:, :, 0]

# Show the image
plt.axis("off")
plt.imshow(circle_mask)
plt.show()

numpy – Fill in the values in a numpy array between pairs of ones

Let's say I have an array that looks like this:

a = np.array([0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0])

I want to fill in the values between each pair of 1s.
This would therefore be the desired output:

a = np.array([0, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1, 1, 0])

I took a look at this answer, which gives the following:

array([0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
       1, 1])

I'm sure this answer is really close to the output I want. However, although I have tried countless times, I cannot change this code to make it work the way I want, as I am not very proficient with numpy arrays.
Any help is greatly appreciated!
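A compact trick, assuming the 1s always come in opening/closing pairs: the cumulative count of 1s is odd strictly between a pair, so OR that parity back with the original array to keep the closing 1s as well.

```python
import numpy as np

a = np.array([0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0,
              1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 0])

# cumsum is odd between an opening and a closing 1; OR keeps the 1s themselves
filled = a | (np.cumsum(a) % 2)
print(filled)
```

If the number of 1s is odd, everything after the last 1 would be filled, so the pairing assumption matters.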

numpy – Get rid of slow for loops in Python

I have created a small python script that will generate test sets for my project.

The script generates 2 datasets with the same dimensions n*m. One contains 0/1 binary values and the other contains floats.

# Probabilities must sum to 1
AMOUNT1 = {0.6 : get_10_20,
           0.4 : get_20_30}

AMOUNT2 = {0.4 : get_10_20,
           0.6 : get_20_30}

OUTCOMES = (AMOUNT1, AMOUNT2)

def pick_random(prob_dict):
    '''
    Given a probability dictionary, with the first argument being the probability,
    Returns a random number given the probability dictionary
    '''
    r, s = random.random(), 0
    for num in prob_dict:
        s += num
        if s >= r:
            return prob_dict[num]()


def compute_trade_amount(action):
    '''
    Select with a probability, depending on the action.
    '''
    return pick_random(OUTCOMES[action])


ACTIONS = pd.DataFrame(np.random.randint(2, size=(n, m)))
AMOUNTS = ACTIONS.applymap(compute_trade_amount)

The script runs correctly and generates the output that I need, but if I want to scale to many dimensions, the for loop in pick_random() slows down my computation.
How can I get rid of it? Maybe with some vectorized numpy operation?

What trips up my reasoning is the if statement, because the sampling has to happen with the given probabilities.
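One way to vectorize this: draw one uniform number per cell and use it to choose between the two samplers, with the probability depending on the action. The samplers below are hypothetical vectorized stand-ins for get_10_20 / get_20_30, which are not defined in the snippet above.

```python
import numpy as np

n, m = 4, 5

# Hypothetical vectorized stand-ins for get_10_20 / get_20_30
def get_10_20(size):
    return np.random.uniform(10, 20, size)

def get_20_30(size):
    return np.random.uniform(20, 30, size)

actions = np.random.randint(2, size=(n, m))

# P(get_10_20) is 0.6 for action 0 (AMOUNT1) and 0.4 for action 1 (AMOUNT2)
p_low = np.where(actions == 0, 0.6, 0.4)

# one uniform draw per cell replaces the per-element if statement
pick_low = np.random.random((n, m)) < p_low
amounts = np.where(pick_low, get_10_20((n, m)), get_20_30((n, m)))
```

The result can be wrapped in pd.DataFrame(amounts) to match the original script. Both samplers are evaluated everywhere, which is fine as long as they are cheap vectorized draws.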

numpy – MNIST python machine learning dataset

I am a noob with machine learning and I have been struggling with this for a few days now and I don't understand why my neural network is having trouble classifying the mnist dataset. I checked my calculations and used the gradient check, but I can't seem to find the problem.

import pickle as pc
import numpy as np
import matplotlib.pyplot as mb
class MNIST:
# fix: gradient checking maybe not working, maybe backprop not working, symmetrical updating, check if copying correctly


    def processImg(self):

        '''
        #slower than pickle file
        inTrset = np.loadtxt("mnist_train.csv", delimiter=",")
        inTestSet = np.loadtxt("mnist_test.csv", delimiter=",")
        fullX = np.asfarray(inTrset[:, 1:])
        fullY = np.asfarray(inTrset[:, :1])
        '''

        with open("binaryMNIST.pkl", "br") as fh:
            data = pc.load(fh)


        img_dim = 28
        features = 784
        m = 60000
        test_m = 10000

        fullX = np.asfarray(data[0])
        bias = np.ones((60000, 1))
        fullX = np.hstack((bias, fullX))

        fullY = np.asfarray(data[1])

        testX = np.asfarray(data[2])
        bias2 = np.ones((10000, 1))
        testX = np.hstack((bias2, testX))

        testY = np.asfarray(data[3])

        fullY = fullY.astype(int)
        testY = testY.astype(int)

        iden = np.identity(10, dtype=int)
        oneHot = np.zeros((m, 10), dtype=int)
        oneHot_T = np.zeros((test_m, 10), dtype=int)

        # creates one one-hot (ones-and-zeros) vector per example indicating the class
        for i in range(test_m):
            oneHot_T[i] = iden[testY[i], :]

        for i in range(m):
            oneHot[i] = iden[fullY[i], :]

        trainX = fullX[:40000, :]
        trainY = oneHot[:40000, :]

        valX = np.asfarray(fullX[40000:, :])
        valY = np.asfarray(oneHot[40000:, :])


        self.trainX = trainX
        self.trainY = trainY
        self.valX = valX
        self.valY = valY
        self.testX = testX
        self.oneHot_T = oneHot_T


    def setThetas(self):
        #784 features
        #5 nodes per layer (not including bias)
        #(nodes in previous layer, nodes in next layer)
        #theta1(785, 5) theta2(6, 5) theta3(6, 10)

        #after finishing, do big 3d matrix of theta and vectorize backprop

        params = np.random.rand(4015)
        self.params = params



    def fbProp(self, theta1, theta2, theta3):

        #after calculating a w/sig(), add bias
        m = np.shape(self.trainY)[0]
        z1 = np.array(np.dot(self.trainX, theta1), dtype = np.float64)

        a1 = self.sig(z1)
        bias = np.ones((40000, 1))
        a1 = np.hstack((bias, a1))
        z2 = np.dot(a1, theta2)
        a2 = self.sig(z2)
        a2 = np.hstack((bias, a2))
        z3 = np.dot(a2, theta3)
        hyp = self.sig(z3)

        g3 = 0
        g2 = 0
        g1 = 0

        for i in range(m):
            dOut = hyp[i, :] - self.trainY[i, :]
            d2 = np.dot(np.transpose(dOut), np.transpose(theta3))
            d2 = d2[1:] * self.sigG(z2[i, :])
            d1 = np.dot(d2, np.transpose(theta2))
            d1 = d1[1:] * self.sigG(z1[i, :])

            g3 = g3 + np.dot(np.transpose(np.array(a2[i, :], ndmin=2)), np.array(dOut, ndmin=2))
            # the layer-2 gradient pairs a1 with the layer-2 delta d2 (was d1)
            g2 = g2 + np.dot(np.transpose(np.array(a1[i, :], ndmin=2)), np.array(d2, ndmin=2))
            g1 = g1 + np.dot(np.transpose(np.array(self.trainX[i, :], ndmin=2)), np.array(d1, ndmin=2))

        self.theta1G = (1/m) * g1
        self.theta2G = (1/m) * g2
        self.theta3G = (1/m) * g3


    def gradDescent(self):

        params = np.array(self.params)
        theta1 = params[0:3925]
        theta1 = np.resize(theta1, (785, 5))
        theta2 = params[3925:3955]
        theta2 = np.resize(theta2, (6, 5))
        theta3 = params[3955:4015]
        theta3 = np.resize(theta3, (6, 10))

        for i in range(self.steps):
            J = self.error(theta1, theta2, theta3, self.trainX, self.trainY)
            print("Iteration: ", i+1, " | error: ", J)
            self.fbProp(theta1, theta2, theta3)
            theta1 = theta1 - (self.alpha * self.theta1G)
            theta2 = theta2 - (self.alpha * self.theta2G)
            theta3 = theta3 - (self.alpha * self.theta3G)



        #On test set
        correct = self.test(theta1, theta2, theta3)
        print(correct/100, "%")


    def errorFromParams(self, params, X, y):
        # renamed from error: Python would otherwise silently replace this
        # definition with the later error(theta1, theta2, theta3, X, y)
        theta1 = params[0:3925]
        theta1 = np.resize(theta1, (785, 5))
        theta2 = params[3925:3955]
        theta2 = np.resize(theta2, (6, 5))
        theta3 = params[3955:4015]
        theta3 = np.resize(theta3, (6, 10))


        bias = np.ones((np.shape(y)[0], 1))
        a1 = self.sig(np.dot(X, theta1))
        a1 = np.hstack((bias, a1))
        a2 = self.sig(np.dot(a1, theta2))
        a2 = np.hstack((bias, a2))
        hyp = self.sig(np.dot(a2, theta3))

        # 10 classes
        pt1 = ((-np.log(hyp) * y) - (np.log(1 - hyp) * (1 - y))).sum()
        J = 1 / 40000 * pt1.sum()

        return J


    def error(self, theta1, theta2, theta3, X, y):
        bias = np.ones((np.shape(y)[0], 1))
        a1 = self.sig(np.dot(X, theta1))
        a1 = np.hstack((bias, a1))
        a2 = self.sig(np.dot(a1, theta2))
        a2 = np.hstack((bias, a2))
        hyp = self.sig(np.dot(a2, theta3))
        print(hyp[0, :])

        # 10 classes
        pt1 = ((np.log(hyp) * y) + (np.log(1 - hyp) * (1 - y))).sum()
        J = -(1 / 40000) * pt1.sum()

        return J



    #def validate(self):

    def test(self, theta1, theta2, theta3):
        X = self.testX
        y = self.oneHot_T
        bias = np.ones((np.shape(y)[0], 1))
        a1 = self.sig(np.dot(X, theta1))
        a1 = np.hstack((bias, a1))
        a2 = self.sig(np.dot(a1, theta2))
        a2 = np.hstack((bias, a2))
        hyp = self.sig(np.dot(a2, theta3))

        correct = 0

        for i in range(np.shape(y)[0]):
            # fix backprop and forward prop, then this
            guess = np.argmax(hyp[i, :])
            match = np.argmax(y[i, :])
            print("guess: ", guess, "| ans: ", match)
            if guess == match:
                correct = correct + 1

        return correct



    def gradientCheck(self):
        params = np.array(self.params)

        def unpack(p):
            # reshape the flat parameter vector into the three weight matrices
            return (np.resize(p[0:3925], (785, 5)),
                    np.resize(p[3925:3955], (6, 5)),
                    np.resize(p[3955:4015], (6, 10)))

        theta1, theta2, theta3 = unpack(params)
        self.fbProp(theta1, theta2, theta3)

        grad = self.theta1G.ravel()
        grad = np.append(grad, self.theta2G.ravel())
        grad = np.append(grad, self.theta3G.ravel())


        print("got grads")
        epsilon = 0.00001

        params2 = np.array(self.params)
        check = np.zeros(np.shape(params))
        for i in range(3965, np.size(params)):
            temp = params[i]
            params[i] = params[i] + epsilon
            params2[i] = params2[i] - epsilon
            check[i] = (self.error(*unpack(params), self.trainX, self.trainY)
                        - self.error(*unpack(params2), self.trainX, self.trainY)) / (2 * epsilon)
            params[i] = temp
            params2[i] = temp
            print(grad[i], " ", check[i])



    def sigG(self, z):
        return (self.sig(z) * (1-self.sig(z)))


    def sig(self, z):
        return 1/(1+(np.exp(-z)))


    def printPictures(self):
        #number of training examples to iterate over

        for i in range(3):
            img = self.trainX[i, 1:].reshape((28, 28))
            mb.title('Digit = {}'.format(np.argmax(self.trainY[i, :])))
            mb.imshow(img, cmap = 'gray_r')
            mb.show()



    def __init__(self, steps, alpha, nodes, h_layers):
        self.steps = steps
        self.alpha = alpha
        self.nodes = nodes
        self.h_layers = h_layers



obj = MNIST(100, 0.1, 5, 1)
obj.processImg()
obj.setThetas()
obj.gradDescent()

#obj.gradientCheck()
#obj.printPictures()
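Beyond the gradient math, two common culprits with this exact setup are the weight initialization and the input scale. A sketch (eps = 0.12 is a conventional, hypothetical choice, and the data below is a random stand-in, not real MNIST):

```python
import numpy as np

# 1) np.random.rand gives weights in [0, 1) -- all positive and, summed over
#    785 inputs per unit, large enough to saturate the sigmoid. Use small,
#    centered values instead:
eps = 0.12
params = np.random.rand(4015) * 2 * eps - eps   # now in [-eps, eps)

# 2) Raw MNIST pixels are 0..255; with hundreds of inputs per unit the
#    pre-activations become huge. Scale the features into [0, 1] first:
fullX = np.random.randint(0, 256, size=(100, 784)).astype(float)  # stand-in data
fullX = fullX / 255.0
```

With saturated sigmoids the gradients are almost exactly zero everywhere, which looks just like "backprop not working" even when the derivation is correct.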

python – Improve the speed of this numpy-based diffraction calculator

I am simulating diffraction patterns of a normally incident Gaussian-profile beam from a 2D array of point scatterers with a height distribution.

The 2D arrays of scatterer positions X, Y and Z each have size N x N, and these are summed over in each call to E_ab(a, b, positions, w_beam). This is done M x M times to build the diffraction pattern.

If I estimate ten floating-point operations per scattering site per pixel and one nanosecond per flop (which my laptop achieves for small numpy arrays), I would expect the time to be 10 * M^2 * N^2 * 1E-09 seconds. For small N, this runs a factor of 50 or 100 slower than that, and for large N (bigger than, say, 2000), it slows down even more. I guess it has something to do with the paging of the large arrays in memory.

What can I do to increase the speed for large N?

Note: right now the height variation Z is random; in the future I plan to also include an additional systematic pitch variation term, so even though the purely Gaussian variation might have an analytical solution, I have to do it numerically.


Since I randomly draw the heights Z here, the plots will be a little different each time. My output (run on a laptop) is below, and I can't even begin to understand why it takes longer (~16 seconds) when w_beam is small than when it is large (~6 seconds).

My estimate 10 * M^2 * N^2 * 1E-09 suggests 0.25 seconds; these runs are about 50 times slower, so there should be substantial room for improvement.

1 16.460583925247192
2 14.861294031143188
4 8.405776023864746
8 6.4988932609558105

Python script:

import numpy as np
import matplotlib.pyplot as plt
import time

def E_ab(a, b, positions, w_beam):
    X, Y, Z = positions
    Rsq = X**2 + Y**2
    phases = k0 * (a*X + b*Y + (1 + np.sqrt(1 - a**2 - b**2))*Z)
    E = np.exp(-Rsq/w_beam**2)  * np.exp(-j*phases)
    return E.sum() / w_beam**2 # rough normalization

twopi, j = 2*np.pi, complex(0, 1)

wavelength = 0.08
k0  = twopi/wavelength

z_noise = 0.05 * wavelength

N, M = 100, 50
x = np.arange(-N, N+1)
X, Y = np.meshgrid(x, x)
Z = z_noise * np.random.normal(size=X.shape) # use random Z noise for now
positions = (X, Y, Z)

A = np.linspace(0, 0.2, M)

answers = []
for w_beam in (1, 2, 4, 8):
    E = []
    tstart = time.time()
    for i, a in enumerate(A):
        EE = []
        for b in A:
            e = E_ab(a, b, positions, w_beam)
            EE.append(e)
        E.append(EE)
    print(w_beam, time.time() - tstart)
    answers.append(np.array(E))

if True:
    plt.figure()
    for i, E in enumerate(answers):
        plt.subplot(2, 2, i+1)
        plt.imshow(np.log10(np.abs(E)), vmin=0.0001)
        plt.colorbar()
    plt.show()

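Two cheap wins, sketched below: the Gaussian weight exp(-Rsq/w_beam**2) does not depend on (a, b), yet E_ab recomputes it M^2 times; hoist it out of the pixel loop. And for small w_beam, almost all sites have negligible weight (the full-array exp over thousands of underflowing values is also where the extra time for small w_beam can go), so drop them with a cutoff before the loop (tol here is an assumed cutoff, not from the original).

```python
import numpy as np

wavelength = 0.08
k0 = 2 * np.pi / wavelength
N, M = 100, 50

x = np.arange(-N, N + 1)
X, Y = np.meshgrid(x, x)
Z = 0.05 * wavelength * np.random.normal(size=X.shape)
A = np.linspace(0, 0.2, M)

def pattern(w_beam, tol=1e-12):
    # weight depends only on w_beam: compute once per pattern, not per pixel,
    # and keep only the sites that contribute more than tol
    W = np.exp(-(X**2 + Y**2) / w_beam**2)
    keep = W > tol
    w, xs, ys, zs = W[keep], X[keep], Y[keep], Z[keep]
    E = np.empty((M, M), dtype=complex)
    for i, a in enumerate(A):
        for jj, b in enumerate(A):
            c = 1 + np.sqrt(1 - a**2 - b**2)
            E[i, jj] = (w * np.exp(-1j * k0 * (a * xs + b * ys + c * zs))).sum() / w_beam**2
    return E
```

For w_beam = 1 this reduces ~40,000 sites to a few hundred, so the small-beam cases become the fast ones instead of the slow ones.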

algorithm – Efficiently move items to the back of a python / numpy array

I'm looking for a way to make this code more efficient, using any way possible.

In [1]: def move_to_back(l, value):
   ...:     total_count = l.count(value)
   ...: 
   ...:     i = 0
   ...:     while i != total_count:
   ...: 
   ...:         if l[i] == value:
   ...:             l.pop(i)
   ...:             l.append(value)
   ...: 
   ...:         i += 1
   ...:     return l
   ...:

In [2]: l = [24, 24, 24, 1, 2, 5, 2]

In [3]: move_to_back(l, 24)
Out[3]: [1, 2, 5, 2, 24, 24, 24]

In [4]: %timeit move_to_back(l, 24)
427 ns ± 5.32 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

If anyone knows how this could be improved in any way (using external libraries is 100% right for me), this would be greatly appreciated.
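One issue with the original: each pop(i) is O(n), so the function is quadratic in the number of matches, and it mutates its input. A single-pass alternative that preserves the order of the remaining items:

```python
def move_to_back(l, value):
    # single pass, O(n): keep everything else in order, append the matches
    kept = [x for x in l if x != value]
    return kept + [value] * (len(l) - len(kept))

print(move_to_back([24, 24, 24, 1, 2, 5, 2], 24))  # [1, 2, 5, 2, 24, 24, 24]
```

For a numpy array a, a stable sort on the boolean mask does the same thing: `a[np.argsort(a == value, kind='stable')]` moves all matches to the back while keeping the relative order of everything else.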

python – Numpy – how to assign a value to indices of different dimensions?

Suppose I have a matrix and some indices

a = np.array([[1, 2, 3], [4, 5, 6]])
a_indices = np.array([[0, 2], [1, 2]])

Is there an efficient way to perform the following operation?

for i in range(2):
    a[i, a_indices[i]] = 100

# a: np.array([[100, 2, 100], [4, 100, 100]])
# a: np.array(((100, 2, 100), (4, 100, 100)))