python – How can I optimize my Von Neumann neighborhood algorithm?

I am working on a small project that requires finding Von Neumann neighborhoods in a matrix. Whenever there is a positive value in the array, I want to collect the neighborhood of that value. I used an iterative approach, since I don't think this can be done in less than O(n) time: every index has to be checked to make sure all of the positive values are found. My solution is this:

is_counted = {}


def populate_dict(grid, n):
    rows = len(grid)
    cols = len(grid[0])
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] > 0:
                find_von_neumann(grid, i, j, n, rows, cols)

    return len(is_counted)


def find_von_neumann(grid, curr_i, curr_j, n, rows, cols):
    if n == 0:
        if grid[curr_i][curr_j] > 0:
            # Tuple keys avoid collisions such as str(1) + str(12)
            # and str(11) + str(2) both producing "112".
            is_counted[(curr_i, curr_j)] = 1
        return

    # Clamp the loop bounds with max/min so that if n > rows/cols,
    # cells outside of the grid are never computed.
    for i in range(max(curr_i - n, 0), min(curr_i + n + 1, rows)):
        for j in range(max(curr_j - n, 0), min(curr_j + n + 1, cols)):
            if abs(curr_i - i) + abs(curr_j - j) <= n:
                is_counted[(i, j)] = 1


neighbors = populate_dict((
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
    (1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1),
), 2)
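To be clear about the definition I'm using: the Von Neumann neighborhood of radius n is just the set of cells within Manhattan distance n of a cell, clamped to the grid. As a standalone sketch (the helper name `von_neumann` is mine, not part of the solution above):

```python
def von_neumann(i, j, n, rows, cols):
    """Cells within Manhattan distance n of (i, j), clamped to the grid."""
    return {
        (r, c)
        for r in range(max(i - n, 0), min(i + n + 1, rows))
        for c in range(max(j - n, 0), min(j + n + 1, cols))
        if abs(r - i) + abs(c - j) <= n
    }

# An interior cell's radius-n neighborhood has 2*n*n + 2*n + 1 cells.
print(len(von_neumann(5, 5, 2, 11, 11)))  # 13
```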

I went with a hash table as my data structure, but I believe the worst-case runtime looks very similar to O(n^2). For example, if N (the distance factor) is equal to the number of rows or columns AND every single value is positive, it's going to be inefficient. Am I missing something with my solution? For example, could dynamic programming help cut down on the runtime, or is this already a good solution?
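To sanity-check that worst case, here is a small standalone count (the function name `window_cells` is mine, purely for instrumentation) of how many cells the clamped inner loops visit on an all-positive g x g grid with radius n = g:

```python
def window_cells(g, n):
    # Sum, over every cell of a g x g grid, the size of the clamped
    # (2n + 1) x (2n + 1) window that the inner loops would scan.
    total = 0
    for ci in range(g):
        for cj in range(g):
            i_span = min(ci + n + 1, g) - max(ci - n, 0)
            j_span = min(cj + n + 1, g) - max(cj - n, 0)
            total += i_span * j_span
    return total

g = 20
print(window_cells(g, g), (g * g) ** 2)  # 160000 160000
```

With n = g every clamped window covers the whole grid, so the total scan count equals (g^2)^2, i.e. quadratic in the number of cells, which matches the worry above.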

I'm fairly new to algorithms and runtime analysis, so please correct me if I messed up anything in my own analysis. Thanks!