python 3.x – How to install OpenCV on Armbian?

I am trying to execute a Python script on my Orange Pi PC with Armbian OS. I tried to install the package with apt-get install, but I get the message "Unable to locate package opencv-contrib-python3". Are there special commands to install OpenCV on Armbian?

How to create a 4D matrix from a vector of matrices in OpenCV C++

Suppose I collect images/matrices of the same size, depth, and number of channels in a vector. So these images are r*c*d each and I have m of them in my vector, as follows.

vector<string> imgs; // there are m image paths in here, all at r*c*d resolution
vector<cv::Mat> vec;
for (const auto& img : imgs) {
    vec.push_back(cv::imread(img, cv::IMREAD_COLOR)); // or grayscale, doesn't really matter
}

Now I want to create a 4D matrix from these. In Python, for example, np.array(vec) would have given me this (assuming vec is a list). I would like the same in OpenCV C++, but I have not found a solution for this.
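To make the target behavior concrete, here is a minimal NumPy sketch of what I mean (small dummy arrays stand in for the real images):

```python
import numpy as np

# Three dummy "images" of identical shape r*c*d = 4*5*3
vec = [np.full((4, 5, 3), i, dtype=np.uint8) for i in range(3)]

# np.stack copies the m arrays once into one contiguous 4D array
stacked = np.stack(vec)
print(stacked.shape)  # (3, 4, 5, 3), i.e. m x r x c x d
```

Note that even NumPy performs one copy here; a true zero-copy view is only possible when the source buffers are already contiguous in memory.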

I don't want to do Mat m(dims, size, type); and then visit each pixel and copy the value, because that is very inefficient. I would like a technique that simply treats vec as a 4D Mat, to make it super fast. Note that I can have 100 full-resolution images.

I use OpenCV 4.2 and C++ on Mac.

Thanks in advance

numpy – Find pixel indices in a shape: OpenCV and Python

Suppose I have a hollow, curved (and not necessarily convex) mask produced by my preprocessing steps:

Hollow circle mask

I now want to try to select all the pixels that occur inside this shape and add them to the mask, as follows:

Filled circle mask

How can I do this in Python?

Code to generate the examples:

import cv2
import numpy as np
import matplotlib.pyplot as plt

# Parameters for creating the circle
COLOR_BLUE = (255, 0, 0)
IMAGE_SHAPE = (256, 256, 3)
CIRCLE_CENTER = tuple(np.array(IMAGE_SHAPE) // 2)[:-1]
LINE_THICKNESS = 5 # Change to -1 for example of filled circle

# Draw the circle
img = np.zeros(IMAGE_SHAPE, dtype=np.uint8)
img_circle = cv2.circle(img, CIRCLE_CENTER, radius=100, color=COLOR_BLUE, thickness=LINE_THICKNESS)  # radius is illustrative
circle_mask = img_circle[:, :, 0]

# Show the image
plt.imshow(circle_mask)
plt.show()

opencv – How to draw a line between points returned by the Dlib library in Python?

I am starting to use the Dlib library with the Python OpenCV library to perform face detection on images. My question is how to draw a line between two points marked in the image. I know the points are marked using coordinates detected by the Dlib library. So I would like to know how to access these coordinates to draw a line between, for example, points 37 and 46 (the corners of the eyes, as shown in the image below).

Facial mapping of the Dlib library

python – OpenCV: each pixel in the image returned by the adaptiveThreshold function is 255

I apply adaptive thresholding to a grayscale image and then want to apply normal thresholding to the image returned by this function. It doesn't work, because every pixel in the returned image is set to 255. I don't understand why: imshow displays the adaptive-threshold result as expected, and it responds to parameter changes. So why is every pixel 255, and why am I unable to get results by feeding this image into the normal threshold function?

I am using OpenCV 4.0.0.

image = cv2.imread('../photos/neptune.jpg', 0)
th2 = cv2.adaptiveThreshold(image, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 3, 2)

# doesn't matter what the second parameter is.
_, thresh = cv2.threshold(th2, 200, 255, cv2.THRESH_BINARY)

python – A problem when displaying images with OpenCV?


I am using my Raspberry Pi camera module v2.1 and my Jetson Nano with Python to capture images with the camera and display them as a sort of "video stream". But my question relates more to the use of OpenCV than to the hardware I use.
My code is as follows:

import cv2
import numpy as np
from scipy.misc import imshow
import time

def gstreamer_pipeline(capture_width=3280, capture_height=2464, display_width=1280, display_height=720, framerate=21, flip_method=0):
    return ('nvarguscamerasrc ! ' 
    'video/x-raw(memory:NVMM), '
    'width=(int)%d, height=(int)%d, '
    'format=(string)NV12, framerate=(fraction)%d/1 ! '
    'nvvidconv flip-method=%d ! '
    'video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! '
    'videoconvert ! '
    'video/x-raw, format=(string)BGR ! appsink'  % (capture_width,capture_height,framerate,flip_method,display_width,display_height))

if __name__ == '__main__':
    cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)

    if cap.isOpened():
        try:
            while cap.isOpened():
                ret, frame = cap.read()
                frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                cv2.imshow('frame', frame)
        except KeyboardInterrupt:
            print("Stopped video streaming.")
        cap.release()
    else:
        print("Unable to open camera.")

I read from the capture device, apply a color conversion to RGB for each frame, and display it with cv2.imshow('frame', frame) as I saw in the tutorials. The problem is that this code does nothing: I get no window and no error message.
The output generated by the program:

GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;

GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 0 
   Output Stream W = 3264 H = 2464 
   seconds to Run    = 0 
   Frame Rate = 21.000000 
GST_ARGUS: PowerService: requested_clock_Hz=37126320
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
Gtk-Message: 21:05:04.463: Failed to load module "canberra-gtk-module"

After the loading error (which occurs often; the program keeps running), nothing happens until I cancel with CTRL+C. If I replace cv2.imshow with a simple imshow(frame) from SciPy, however, it works normally and displays images, so I think the problem lies in OpenCV rather than in my camera.
Any idea what I'm doing wrong, and how to get a steady image stream (>= 21 FPS) working?

Thanks for the answers in advance!

image – Python OpenCV drawing functions blink for some reason during animation

I have been trying to do UI things in OpenCV, but for some reason I think the drawing function fails, and the result is a shape that seems to blink…

Here are some details that might be helpful:

I have an 8th generation Core i5 and this program uses 10-13% of the processing power of my processor.
When setting high priority for python, the blinking decreases but does not stop.
I normally run this code alongside a fairly large set of libraries that I wrote, which makes the problem unbearable.

Here is my code:

import numpy as np
import cv2

img = np.zeros((600, 600, 3))
img[:, :, :] = 255
cent = (int(img.shape[0] / 2), int(img.shape[1] / 2))
while True:
    key = cv2.waitKey(1) & 0xFF

    img[:, :, :] = 255
    cv2.circle(img, cent, radius=100, color=(0, 0, 0), thickness=3)

    img = np.flip(img, axis=0)
    cv2.imshow("image", img)

    #img[:, :, :] = 255

    if key == ord("q"):
        break

Thanks for the help!!

linux – Developing a program using OpenCV GPU in C++ on Ubuntu without root

I want to compile and run a program on an Ubuntu server. The program uses the OpenCV GPU libraries.

CUDA 9.0 is already installed on the server. The problem is that I do not have root access, and all the tutorials I have seen use it to build OpenCV with GPU support. Is there a method like virtualenv in Python? (I'm not sure they are related.)

Sorry for such a noob question, but I just do not know where to start.
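In case it helps frame the question, the build configuration I am considering (untested; paths, versions, and flags are my assumptions) is to install OpenCV into a prefix under my home directory, which needs no root, much like virtualenv gives a per-user prefix for Python packages:

```shell
# Assumed layout: OpenCV sources unpacked in ~/opencv, building in ~/opencv/build
cd ~/opencv && mkdir -p build && cd build

# Install into a user-writable prefix instead of /usr/local
cmake -DCMAKE_INSTALL_PREFIX=$HOME/.local \
      -DWITH_CUDA=ON \
      ..

make -j4
make install        # no sudo needed: everything lands under $HOME/.local

# Point the linker and CMake at the private install at build/run time
export LD_LIBRARY_PATH=$HOME/.local/lib:$LD_LIBRARY_PATH
export OpenCV_DIR=$HOME/.local/lib/cmake/opencv4
```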

opencv – Finding a specific cow in part of the picture using Python

I am finding it hard to find and crop cow A within frames collected from our CCTV camera, because the barn contains many cows. Only the top view of one specific cow needs to be recognized. We have tried template matching, but I think it does not fit the project requirements. Can anyone suggest what Python 3.7 code we can use? Cow A is on the right side and the other cow is on the left side; the latter does not need to be recognized.
In addition, we proposed to use SIFT features for predictive modeling in Python 3.7 with Random Forest, kNN, and SVM.
It's the first time I have come here. I hope someone will help/guide me; it would really help our group with this project. Thank you.

This is an example of an image frame:
this is the current result:

the current code that I've used:

import os
import cv2
import numpy as np
import glob

# empty list to store template images
template_data = []
# make a list of all template images from a directory
files1 = glob.glob(r'C:\Users\Test_Cow*.jpg')

for myfile in files1:
    image = cv2.imread(myfile, 0)
    template_data.append(image)

methods = ('cv2.TM_CCOEFF', 'cv2.TM_CCOEFF_NORMED', 'cv2.TM_CCORR',
           'cv2.TM_CCORR_NORMED', 'cv2.TM_SQDIFF', 'cv2.TM_SQDIFF_NORMED')

for meth in methods:
    method = eval(meth)

# loop for matching
for tmp in template_data:
    (tW, tH) = tmp.shape[::-1]
    result = cv2.matchTemplate(test_image, tmp, method)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(result)
    if method in (cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED):
        top_left = min_loc
    else:
        top_left = max_loc
    bottom_right = (top_left[0] + tW, top_left[1] + tH)

rect_img = test_image[top_left[1]:bottom_right[1], top_left[0]:bottom_right[0]]   # modify value
cv2.imwrite(os.path.join(path, 'FR' + str(i) + '.jpg'), rect_img)

16.04 – Image capture using OpenCV. Why is the USB camera faster than the CSI camera? (No GPU)

I am using a NanoPi Duo 2 for a real-time image-acquisition project.

I noticed a significant difference in performance (speed) between the use of a CSI camera and a USB camera.

The performance difference is as follows.
Time per OpenCV frame capture:

CSI_OV5640_Camera = ~0.04 s (40 ms)
USB_Logitech_HD_C270 = ~0.009 s (9 ms)

As far as I know, the NanoPi Duo 2 does not have a GPU, so the CSI camera is handled by the CPU (the same as the USB camera).

Using $ htop, both the CSI and USB cameras show 100% usage on one of the 4 cores.

For background, the OpenCV 3.4.6 build output shows:

Video I/O
 - libv4l/libv4l2    NO
 - v4l/v4l2          linux/videodev2.h

$ v4l2-ctl --get-fmt-video

Format Video Capture:
        Width/Height      : 640/480
        Pixel Format      : 'YV12'
        Field             : Any
        Bytes per Line    : 960
        Size Image        : 460800
        Colorspace        : Default
        Transfer Function : Default
        YCbCr Encoding    : Default
        Quantization      : Default
        Flags             :

Interestingly, the performance of operations on the resulting numpy ndarrays from the CSI and USB cameras also differs:

pyzbar.decode() computation time:
CSI_OV5640_Camera = ~0.43s (430 ms)
USB_Logitech_HD_C270 = ~0.19s (190 ms)
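For transparency, per-call timings like the ones above can be taken with a wrapper along these lines (shown here against a cheap stub operation so it runs anywhere; on the board the callable would be the camera read or the decode call):

```python
import time
import numpy as np

def time_call(fn, n=50):
    """Average wall-clock seconds per call of fn over n runs."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n

# Cheap stub standing in for cap.read() or pyzbar.decode()
frame = np.zeros((480, 640, 3), dtype=np.uint8)
avg = time_call(lambda: frame.copy())
print(f"{avg * 1000:.3f} ms per call")
```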

I think the encoding of the captured frames is different, but all I see is the same-sized (640x480, 3-channel) numpy ndarray with similar values.

Thank you for reading my question.