java – .bnd files have errors while compiling in IntelliJ

When I build my project in IntelliJ, I receive errors for each of my .bnd files (I have no other type of error).

I started working on a new project that works without any problems for my colleagues (who are too busy to help me do the setup).
What I have done so far:
– got the code and imported it as a Maven project in IntelliJ (no errors)
– ran mvn clean install from the terminal (no errors)
– built the project from IntelliJ (425 errors – about 2 per .bnd file)
– tried disabling the OSGi plugins (dmServerSupport and Spring OSGi)
– tried disabling the OSGi inspection profile
– tried adding the files under Compiler -> Excludes (IntelliJ still builds them)

What can I do to get rid of all these errors?

My OSGi manifest looks like this:

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-Name: Goodies
Bundle-SymbolicName: com.....osgi.goodies
Bundle-Version: 1.0.0.qualifier
Bundle-Vendor: ...
Bundle-RequiredExecutionEnvironment: JavaSE-1.7
Import-Package: org.osgi.framework;version="1.7.0",
 org.osgi.service.component.annotations;version="1.2.0",
 org.osgi.service.event;version="1.3.0",
 org.osgi.service.packageadmin;version="1.2.0"

For example, I have this .bnd file:

-classpath: target/classes
-dsannotations: *
Include-Resource: README.HTML
Private-Package: com.....pubsub.impl
Export-Package: com.....pubsub

For the file above, I get the following errors:

Error: osgi: [pubsub] Invalid manifest header: -classpath, pattern=[A-Za-z0-9][-a-zA-Z0-9_]+
Error: osgi: [pubsub] Invalid manifest header: -dsannotations, pattern=[A-Za-z0-9][-a-zA-Z0-9_]+
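
The pattern quoted in those messages explains both errors: it is the validator's header-name regex, and its first character class [A-Za-z0-9] excludes '-', so every bnd instruction that starts with a dash is rejected when the file is parsed as a plain manifest. A standalone sketch of that check (just an illustration, not part of the build):

    import java.util.regex.Pattern;

    public class HeaderCheck {
        public static void main(String[] args) {
            // The header-name pattern quoted in the build errors.
            Pattern header = Pattern.compile("[A-Za-z0-9][-a-zA-Z0-9_]+");
            System.out.println(header.matcher("Export-Package").matches());  // true
            System.out.println(header.matcher("-classpath").matches());      // false
            System.out.println(header.matcher("-dsannotations").matches());  // false
        }
    }

In other words, the .bnd files look fine for bnd itself; the errors appear because something in the build is validating them as MANIFEST.MF-style headers.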

Another example:

-classpath: target/classes
-dsannotations: *
Include-Resource: README.HTML
Private-Package: com.....security.thread.impl
Export-Package: com.....security.thread
Import-Package: oracle.jdbc;version=0,
 *

Building the project containing the .bnd file above will result in the following error:

Error: osgi: [security.thread] Exception: java.io.IOException: Invalid header field
    at java.util.jar.Attributes.read(Attributes.java:406)
    at java.util.jar.Manifest.read(Manifest.java:199)
    at java.util.jar.Manifest.<init>(Manifest.java:69)
    at aQute.bnd.osgi.Builder.build(Builder.java:116)
    at org.jetbrains.osgi.jps.build.BndWrapper.doBuild(BndWrapper.java:262)
    at org.jetbrains.osgi.jps.build.BndWrapper.build(BndWrapper.java:192)
    at org.jetbrains.osgi.jps.build.OsgiBuildSession.doBuild(OsgiBuildSession.java:211)
    at org.jetbrains.osgi.jps.build.OsgiBuildSession.build(OsgiBuildSession.java:79)
    at org.jetbrains.osgi.jps.build.OsmorcBuilder.build(OsmorcBuilder.java:54)
    at org.jetbrains.osgi.jps.build.OsmorcBuilder.build(OsmorcBuilder.java:33)
    at org.jetbrains.jps.incremental.IncProjectBuilder.buildTarget(IncProjectBuilder.java:1023)
    at org.jetbrains.jps.incremental.IncProjectBuilder.runBuildersForChunk(IncProjectBuilder.java:1004)
    at org.jetbrains.jps.incremental.IncProjectBuilder.buildTargetsChunk(IncProjectBuilder.java:1065)
    at org.jetbrains.jps.incremental.IncProjectBuilder.buildChunkIfAffected(IncProjectBuilder.java:956)
    at org.jetbrains.jps.incremental.IncProjectBuilder.access$500(IncProjectBuilder.java:73)
    at org.jetbrains.jps.incremental.IncProjectBuilder$BuildParallelizer.lambda$queueTask$0(IncProjectBuilder.java:927)

Errors on the site

Dear all,
I am working on creating a website with Blogger and I am a beginner at HTML coding. This is my current website, which I'm still working on: https://howtoplaystation.blogspot.com/
Can you help me with this problem?
Why are the images on the right side (Popular posts tab) cropped? How can I fix it? They are also cropped here: https://howtoplaystation.blogspot.com/2019/07/blog-post.html at the bottom (in the "YOU WILL LIKE" tab).

Thank you very much in advance!

c++ – errors when building gcc-9.1.0 under cygwin

I received this error:

/usr/local/contrib/gcc-9.1.0/libbacktrace/pecoff.c: In function 'coff_add':
/usr/local/contrib/gcc-9.1.0/libbacktrace/pecoff.c:656:37: error: pointer of type 'void *' used in arithmetic [-Werror=pointer-arith]
  656 |   memcpy (&fhdr, fhdr_view.data + 4, sizeof fhdr);
      |                                 ^
/usr/local/contrib/gcc-9.1.0/libbacktrace/pecoff.c:690:22: error: pointer of type 'void *' used in arithmetic [-Werror=pointer-arith]
  690 |     (sects_view.data + fhdr.size_of_optional_header);
      |                      ^
/usr/local/contrib/gcc-9.1.0/libbacktrace/pecoff.c:730:45: error: pointer of type 'void *' used in arithmetic [-Werror=pointer-arith]
  730 |   str_size = coff_read4 (syms_view.data + syms_size);
      |                                          ^
cc1: all warnings are treated as errors
make[3]: *** [Makefile:1190: pecoff.lo] Error 1
make[3]: Leaving directory '/usr/local/contrib/gcc_build/x86_64-pc-cygwin/libbacktrace'
make[2]: *** [Makefile:973: all] Error 2
make[2]: Leaving directory '/usr/local/contrib/gcc_build/x86_64-pc-cygwin/libbacktrace'
make[1]: *** [Makefile:19155: all-target-libbacktrace] Error 2
make[1]: Leaving directory '/usr/local/contrib/gcc_build'
make: *** [Makefile:996: all] Error 2

(ignore the bracketed numbers being links; they were wrongly tied to the fossil URLs used below)

when make performed the following:

libtool: compile: /usr/local/contrib/gcc_build/./gcc/xgcc -B/usr/local/contrib/gcc_build/./gcc/ -B/usr/local/x86_64-pc-cygwin/bin/ -B/usr/local/x86_64-pc-cygwin/lib/ -isystem /usr/local/x86_64-pc-cygwin/include -isystem /usr/local/x86_64-pc-cygwin/sys-include -DHAVE_CONFIG_H -I. -I/usr/local/contrib/gcc-9.1.0/libbacktrace -I/usr/local/contrib/gcc-9.1.0/libbacktrace/../include -I/usr/local/contrib/gcc-9.1.0/libbacktrace/../libgcc -I../libgcc -funwind-tables -frandom-seed=pecoff.lo -W -Wall -Wwrite-strings -Wstrict-prototypes -Wmissing-prototypes -Wold-style-definition -Wmissing-format-attribute -Wcast-qual -Werror -g -O2 -pedantic -fomit-frame-pointer -m64 -mtune=sandybridge -march=sandybridge -c /usr/local/contrib/gcc-9.1.0/libbacktrace/pecoff.c -o pecoff.o
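
For context, arithmetic on void * is a GNU extension that ISO C forbids; the -pedantic in my CFLAGS implies -Wpointer-arith, and libbacktrace builds with -Werror, which upgrades the warning to a hard error. Dropping -pedantic from CFLAGS, or configuring with --disable-werror, should avoid it; alternatively, a local patch sketch (my assumption, not an official fix) casts to a byte pointer before adding the offset:

    /* original line 656: arithmetic directly on a void pointer */
    memcpy (&fhdr, fhdr_view.data + 4, sizeof fhdr);

    /* workaround: cast to a byte pointer first */
    memcpy (&fhdr, (const char *) fhdr_view.data + 4, sizeof fhdr);

(The same cast would apply to the expressions at lines 690 and 730.)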

My make version is GNU Make 4.2.1 for x86_64-unknown-cygwin and my current GCC is 7.4.0.

I invoked the source tree's configure script as follows,

                    /usr/local/contrib/gcc-9.1.0/configure --enable-static --disable-shared --with-mpfr-include=/usr/local/include/ --with-mpfr-lib=/usr/local/lib/ --with-mpc-include=/usr/local/include/ --with-mpc-lib=/usr/local/lib/ --with-gmp-include=/usr/local/include/ --with-gmp-lib=/usr/local/lib/ CC=gcc CFLAGS="-O2 -pedantic -fomit-frame-pointer -m64 -mtune=sandybridge -march=sandybridge"

for compatibility with the other configurations I've made for the mpfr, mpc, and gmp dependencies, which share the same build options. --enable-static and --disable-shared make configure skip looking for shared libraries and pick static ones instead; otherwise I run into errors while building the dependencies, because gmp is built as static libraries.

project on the fossil website: source gcc-9.1.0 and configure file: configuration step

Google Foobar challenge (Python) – partial test case failures on "Queue To Do"

I started the Foobar challenge a few days ago, and I am now at the first problem of the third level, the challenge "Queue To Do". The problem is that I get partial failures in the verification process. There are 10 test cases, 2 visible and 8 hidden, and I have a problem with three of the hidden test cases.
This is the question:

Queue To Do
===========

You are almost ready to destroy the LAMBCHOP doomsday device, but the security checkpoints that protect LAMBCHOP's underlying systems are going to be a problem. You managed to take one down without triggering any alarms, which is great! Except that, as Commander Lambda's assistant, you learned that the checkpoints are about to come under automated review, which means your sabotage will be discovered and your cover blown - unless you can fool the automated review system.

To fool the system, you will have to write a program that returns the same security checksum the guards would have after checking all the workers through. Fortunately, Commander Lambda's desire for efficiency does not allow for hours-long lines, so the checkpoint guards have found ways to speed up the pass-through rate. Instead of checking every single worker coming through, the guards check everyone in line while noting their security IDs, then let the line fill back up. Once done, they go over the line again, this time leaving off the last worker. They continue like this, leaving off one more worker each time but recording the security IDs of those they do check, until they skip the entire line, at which point they XOR the recorded IDs together into a checksum and then head off to lunch. Fortunately, the workers' orderly nature causes them to always line up in numerical order without any gaps.

For example, if the first worker in line has ID 0 and the security checkpoint line holds three workers, the process would look like this:
0 1 2 /
3 4/5
6/7 8
where the guards' XOR (^) checksum is 0 ^ 1 ^ 2 ^ 3 ^ 4 ^ 6 == 2.

Similarly, if the first worker has ID 17 and the checkpoint line holds four workers, the process would look like this:
17 18 19 20 /
21 22 23/24
25 26/27 28
29/30 31 32
which produces the checksum 17 ^ 18 ^ 19 ^ 20 ^ 21 ^ 22 ^ 23 ^ 25 ^ 26 ^ 29 == 14.

All worker IDs (including the first worker) are between 0 and 200,000,000 inclusive, and the checkpoint line will always have a minimum length of 1 worker.

With this information, write a function solution(start, length) that will cover for the missing security checkpoint by outputting the same checksum the guards would normally submit before lunch. You have just enough time to find out the ID of the first worker to be checked (start) and the length of the line (length) before the automated review arrives, so your program must generate the proper checksum with just those two values.

Languages
=========

To provide a Java solution, edit Solution.java
To provide a Python solution, edit solution.py

Test case
==========
Your code must pass the following tests.
Note that it can also be run on hidden test cases not shown here.

-- Java cases --
Input:
Solution.solution(0, 3)
Output:
2

Input:
Solution.solution(17, 4)
Output:
14

-- Python cases --
Input:
solution.solution(0, 3)
Output:
2

Input:
solution.solution(17, 4)
Output:
14

TEST1

Here is the first code I tested:

def solution(start, length):
    checksum = 0
    for j in range(length):
        for i in range(length - j):
            checksum ^= (start + j * length + i)
    return checksum

This code failed three hidden test cases: #5, #6, and #9.

TEST2

So I tried to rebuild the answer using a list.

def solution(start, length):
    checksum = 0
    strin = []
    for j in range(length):
        for i in range(length - j):
            strin.append(start + j * length + i)
    for i in strin:
        checksum ^= i
    return checksum

But now this one fails test cases #3, #5, #6, and #9.

I would like to know why this difference appeared on test #3. And if possible, I would like to know what could be causing the failures in the other test cases (#5, #6, #9).
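
A guess at the cause (an assumption on my part, since the failing cases are hidden): both versions touch O(length²) IDs one by one, which can time out for long lines, and TEST2 additionally materializes every ID in a list, which can also blow the memory limit; that would explain why #3 only fails in the second version. The usual fix is to XOR a whole consecutive range in O(1) using the period-4 pattern of 0 ^ 1 ^ ... ^ n, making the checksum O(length) overall. A sketch:

    def xor_upto(n):
        # XOR of all integers 0..n follows a period-4 pattern.
        return [n, 1, n + 1, 0][n % 4]

    def xor_range(a, b):
        # XOR of all integers a..b inclusive, assuming 0 <= a <= b.
        return xor_upto(b) ^ (xor_upto(a - 1) if a > 0 else 0)

    def solution(start, length):
        checksum = 0
        for j in range(length):
            row_start = start + j * length          # first ID in row j
            row_end = row_start + (length - j) - 1  # last checked ID in row j
            checksum ^= xor_range(row_start, row_end)
        return checksum

    print(solution(0, 3))   # 2
    print(solution(17, 4))  # 14

Both visible cases check out against the expected outputs 2 and 14.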

filesystems – Force ZFS to ignore checksum errors without removing the offending disk

I'm running a pool of USB drives on a non-critical server with non-critical data, and I don't care if it gets corrupted. I'm trying to configure ZFS so that it does not forcibly remove the USB drives when they hit checksum errors (much the way ext4 or FAT handle this scenario: by simply not noticing the data loss).

Warning:

For readers who arrive here via Google trying to repair their ZFS pool: do not try anything described in this question or its answers; you will lose your data!

Because the ZFS police love to shout at anyone using USB drives
or any other non-standard configuration: for the sake of this discussion,
let's assume these are cat videos that I have also saved in 32 other physically
remote locations on 128 redundant SSDs. I fully accept that I will probably lose 100% of
the data on this pool, unrecoverably (multiple times), if I try to do this.
I'm asking this question for people who are curious about
just how bad an environment ZFS is able to run in (people
who like to push systems to their breaking point and beyond, just for
fun). Approach this question with a sense of humor!

So here is the configuration:

  • HP EliteDesk server running FreeNAS-11.2-U5
  • 2x 8 TB WD Elements drives connected via USB 3.0
  • an unreliable power environment; the server and drives are often forced to reboot/disconnect without warning (yes, I have a UPS; no, I do not want to use it, I want to break this server, haven't you read the warning?)
  • a mirrored pool hdd containing both drives (with failmode=continue set)
  • one drive is stable: even after several reboots and forced disconnections, it never seems to report checksum errors or other problems in ZFS
  • one drive is unreliable, with occasional checksum errors during normal operation (even when it is not disconnected unexpectedly); the errors do not seem related to the bad power environment, as the drive will run fine for 10+ hours and then suddenly get ejected from the pool because of checksum failures

I have confirmed that the unreliable drive's problem comes from a software or hardware issue with the USB bus on the server, not from a flaky cable or a physical problem with the drive itself. The way I confirmed this was to plug it into my MacBook (whose USB ports are in good shape), zero it, then write random data across the entire drive and verify it (done 3 times, 100% success each time). The drive is almost new, and no SMART indicator is below 100% health. But even if the drive were failing gradually and losing a few bits here and there, I would be fine with that.

Here is the problem:

Whenever the flaky drive hits checksum errors, ZFS removes it from the pool. Unfortunately, FreeNAS gives me no way to re-add it to the pool without physically rebooting, or unplugging and reconnecting both the USB cable and the drive's power supply. This means I cannot script the re-add process or do it remotely without restarting the entire server; I would have to be physically present to unplug things, or wire up an internet-connected Arduino with a relay in each of the two cables.

Possible solutions

I've already done quite a bit of research into whether this sort of thing is possible, and it's difficult, because whenever I find a relevant thread, the data-integrity police step in and convince the asker to abandon their unreliable setup instead of ignoring the errors or working around them. I'm resorting to asking here because I have not been able to find documentation or other answers on how to accomplish this.

  • disabling checksums entirely with zfs set checksum=off hdd – I haven't done this yet, because I would ideally like to keep checksums so I know when the drive misbehaves; I just want to ignore the failures
  • a flag that keeps checksumming but ignores checksum errors / tries to repair them without removing the drive from the pool
  • a ZFS flag that raises the maximum number of checksum errors allowed before a drive is removed (currently the drive gets kicked out after about 13 errors)
  • a FreeBSD/FreeNAS command that lets me force the device back online after it has been removed, without restarting the entire server
  • a FreeBSD/FreeNAS kernel option that forbids this drive from ever being removed
  • a FreeBSD sysctl option that magically fixes whatever USB bus problem is causing the errors/timeouts on this drive only (unlikely)

I'm really trying to avoid falling back to ext4 or some other filesystem that doesn't forcibly remove drives after USB errors, because I want to keep all the other ZFS features, such as snapshots, datasets, send/receive, etc. I'm only trying to disable the data-integrity enforcement.
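
For the "force the device back online" part, the obvious candidates are the stock commands below, with the caveat (my assumption from the logs) that they only work while the da4 device node still exists; once the USB stack re-enumerates the disk at a new address there is nothing for them to attach to:

    # reset the pool's error counters so the drive stops accruing strikes
    zpool clear hdd da4

    # ask ZFS to bring the vdev back online (fails once da4 is gone)
    zpool online hdd da4

    # reset the USB device itself without touching any cables
    usbconfig -d 0.8 reset

None of these raises the eviction threshold itself, which is the part I cannot find a knob for.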

Relevant logs

This is the dmesg output whenever the drive misbehaves and gets removed:

Jul  7 04:10:35 freenas-lemon ZFS: vdev state changed, pool_guid=13427464797767151426 vdev_guid=11823196300981694957
Jul  7 04:10:35 freenas-lemon ugen0.8: at usbus0 (disconnected)
Jul  7 04:10:35 freenas-lemon umass4: at uhub2, port 20, addr 7 (disconnected)
Jul  7 04:10:35 freenas-lemon da4 at umass-sim4 bus 4 scbus7 target 0 lun 0
Jul  7 04:10:35 freenas-lemon da4: s/n 5641474A4D56574C detached
Jul  7 04:10:35 freenas-lemon (da4:umass-sim4:4:0:0): Periph destroyed
Jul  7 04:10:35 freenas-lemon umass4: detached
Jul  7 04:10:46 freenas-lemon usbd_req_re_enumerate: addr=9, set address failed! (USB_ERR_IOERROR, ignored)
Jul  7 04:10:52 freenas-lemon usbd_setup_device_desc: getting device descriptor at addr 9 failed, USB_ERR_TIMEOUT
Jul  7 04:10:52 freenas-lemon usbd_req_re_enumerate: addr=9, set address failed! (USB_ERR_IOERROR, ignored)
Jul  7 04:10:58 freenas-lemon usbd_setup_device_desc: getting device descriptor at addr 9 failed, USB_ERR_TIMEOUT
Jul  7 04:10:58 freenas-lemon usb_alloc_device: set config index 0 failed: USB_ERR_TIMEOUT, port 20, addr 9 (ignored)
Jul  7 04:10:58 freenas-lemon ugen0.8: at usbus0
Jul  7 04:10:58 freenas-lemon ugen0.8: at usbus0 (disconnected)

errors when installing Linux from USB

My first error is 'uefi table not found', then 's size not found'.

The second error is that the USB key becomes unusable and cannot be reformatted in OS X.

I've used balenaEtcher, UNetbootin, and Mac Linux USB Loader.

Many worker_connections errors in the nginx log

I have a website hosted on a VPS behind CloudFlare, and I've been getting many 520 errors (the web server returns an unknown error) and 525 errors (the SSL handshake failed). When reviewing my server's logs, I found many nginx errors. The error is as follows:

2019/07/03 15:24:25 [alert] 6782#0: 2048 worker_connections are not enough

This error appears in the log up to 17 times per minute. The nginx configuration file is as follows:

    #user  nginx;
    worker_processes  1;

    #error_log  /var/log/nginx/error.log;
    #error_log  /var/log/nginx/error.log  notice;
    #error_log  /var/log/nginx/error.log  info;

    #pid  /var/run/nginx.pid;

    include /etc/nginx/modules.conf.d/*.conf;

    events {
        worker_connections  2048;
    }


    http {
        include       mime.types;
        default_type  application/octet-stream;

        #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
        #                  '$status $body_bytes_sent "$http_referer" '
        #                  '"$http_user_agent" "$http_x_forwarded_for"';

        #access_log  /var/log/nginx/access.log  main;

        sendfile        on;
        #tcp_nopush     on;

        #keepalive_timeout  0;
        keepalive_timeout  65;
        #tcp_nodelay    on;

        #gzip  on;
        #gzip_disable "MSIE [1-6]\.(?!.*SV1)";

        server_tokens off;

        include /etc/nginx/conf.d/*.conf;
    }

    # override global settings, e.g. worker_rlimit_nofile
    include /etc/nginx/*global_params;

Initially the value of worker_connections was 1024 and I raised it to 2048, but the problem remains the same.

The server has 4 GB of RAM and a 4-core CPU at 2.1 GHz per core, and nginx runs in front of Apache, configured by the Plesk panel.

My question is: how would you recommend solving this problem? I have searched the web and have not found an answer that convinces me. I am not sure of the maximum recommended values I can set for worker_processes and worker_connections given the characteristics of the server.
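
For scale: with worker_processes 1, the whole server is capped at a single worker's 2048 connections, and because nginx is proxying to Apache each client request can hold two connections (client side plus upstream side). A starting point for a 4-core/4 GB box might look like the sketch below (the numbers are assumptions to tune under load, not a definitive recommendation):

    worker_processes auto;         # one worker per CPU core (4 here)
    worker_rlimit_nofile 16384;    # must comfortably exceed worker_connections

    events {
        worker_connections 8192;   # per worker: 4 workers allow 32768 in total
        multi_accept on;
    }

Under Plesk these directives may be managed by its own templates, so it is worth confirming where the effective nginx.conf is generated before editing it by hand.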

usability – What is the best practice for displaying errors when parsing a large csv file?

I am designing the experience for uploading and validating a large CSV file, which can contain up to 20,000 rows.

It would really help if there were a way to show the user all the line numbers that have errors, along with what each error is.
That is fine when there are 10 to 15 errors, but what happens if more than 1,000 lines contain errors?

Currently, I plan to work around this by not mentioning the offending line numbers at all, and instead simply listing all the possible types of errors that may appear in the file. See below:

[screenshot of the proposed error summary]

What is the best way to show errors for large files, where there can be thousands of them, while making the errors as useful and descriptive as possible for users?

google sheets – Fix errors in a series of FILTER functions between array brackets?

  • as long as both queries/filters return something, everything is fine:

    [screenshot of the combined result]

  • however, if any of these queries/filters has nothing to return, it outputs #N/A ("No matches are found in the QUERY/FILTER evaluation.") – and the problem is that the #N/A sits only in the 1st cell:

    [screenshot of the #N/A result]

  • but the array literal expects the matrices on both sides to be the same shape (4 columns from both queries/filters):

    [screenshot of the mismatched-array error]

  • so we wrap each query in IFERROR, and in case of error we produce a fake row with 4 fake columns – {"", "", "", ""} – which tricks the array into outputting it as:

    [screenshot of the fixed result]

  • try it like this:

={IFERROR(SORT(FILTER(Sheet1!$AH$13:$AI$62, Sheet1!$AJ$13:$AJ$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$AK$13:$AL$62, Sheet1!$AM$13:$AM$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$AN$13:$AO$62, Sheet1!$AP$13:$AP$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$AQ$13:$AR$62, Sheet1!$AS$13:$AS$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$AT$13:$AU$62, Sheet1!$AV$13:$AV$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$AW$13:$AX$62, Sheet1!$AY$13:$AY$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$AZ$13:$BA$62, Sheet1!$BB$13:$BB$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$BC$13:$BD$62, Sheet1!$BE$13:$BE$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$BF$13:$BG$62, Sheet1!$BH$13:$BH$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$BI$13:$BJ$62, Sheet1!$BK$13:$BK$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$BL$13:$BM$62, Sheet1!$BN$13:$BN$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$BO$13:$BP$62, Sheet1!$BQ$13:$BQ$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$BR$13:$BS$62, Sheet1!$BT$13:$BT$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$BU$13:$BV$62, Sheet1!$BW$13:$BW$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$BX$13:$BY$62, Sheet1!$BZ$13:$BZ$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$CA$13:$CB$62, Sheet1!$CC$13:$CC$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$CD$13:$CE$62, Sheet1!$CF$13:$CF$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$CG$13:$CH$62, Sheet1!$CI$13:$CI$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$CJ$13:$CK$62, Sheet1!$CL$13:$CL$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$CM$13:$CN$62, Sheet1!$CO$13:$CO$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$CP$13:$CQ$62, Sheet1!$CR$13:$CR$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$CS$13:$CT$62, Sheet1!$CU$13:$CU$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$CV$13:$CW$62, Sheet1!$CX$13:$CX$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$CY$13:$CZ$62, Sheet1!$DA$13:$DA$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$DB$13:$DC$62, Sheet1!$DD$13:$DD$62=B5), 2, 1), {"None today", ""});
  IFERROR(SORT(FILTER(Sheet1!$DE$13:$DF$62, Sheet1!$DG$13:$DG$62=B5), 2, 1), {"None today", ""})}

plugins – Woocommerce functions in a custom class, avoid errors

How to call Woocommerce functions in a custom class?

I have the following class method that is called on a cron job action.

Here is how I register my cron action:

register_activation_hook(__FILE__, array('Ratings\Admin\MailJobs', 'activateMailJobs'));

and here is the whole class:

namespace Ratings\Admin;

use Ratings\Email\CronMails as Mailer;

/**
 * Class MailJobs
 * @package Ratings\Admin
 */
class MailJobs
{

    private $options;


    /**
     * MailJobs constructor.
     */
    public function __construct()
    {
        if (in_array('woocommerce/woocommerce.php', apply_filters('active_plugins', get_option('active_plugins')))) {
            add_action('ausg_daily_event', array($this, 'dailyAusgMails'));
            $this->dailyAusgMails();
        }
    }

    public function init()
    {
        if (in_array('woocommerce/woocommerce.php', apply_filters('active_plugins', get_option('active_plugins')))) {
            add_action('ausg_daily_event', array($this, 'dailyAusgMails'));
        }
    }

    /**
     * Activate the mail cron jobs
     */
    public function activateMailJobs()
    {
        if (!wp_next_scheduled('ausg_daily_event')) {
            wp_schedule_event(strtotime('23:59:00'), 'daily', 'ausg_daily_event');
        }
    }

    /**
     * Disable the mail cron jobs
     */
    protected function deactivateMailJobs()
    {
        wp_clear_scheduled_hook('ausg_daily_event');
    }

    /**
     * Send emails on the cron job
     */
    protected function dailyAusgMails()
    {
        add_action('woocommerce_after_register_post_type', array($this, 'getPaidOrders'));
    }

    /**
     * @return \stdClass|\WC_Order[]
     */
    public function getPaidOrders()
    {

        /**
         * $orders = $this->getPaidOrders();
         * var_dump($orders);
         * die;
         * if ($orders) {
         *     foreach ($orders as $order) {
         *         Mailer::send();
         *     }
         * }
         */


        $targetDate       = strtotime('-3 days');
        $targetDateObject = date('Y-m-d', $targetDate);

        $args = array(
            'limit'  => -1,
            'status' => array('wc-completed'),
            'type'   => array('shop_order'),
            // 'date_paid' => $targetDateObject . '...' . $targetDateObject,
        );

        $orders = wc_get_orders($args);

        // var_dump(count($orders));
        // wp_die();
        return $orders;
    }
}

which fails with the following error: Fatal error: Uncaught Error: Call to undefined function Ratings\Admin\wc_get_orders()
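
The message itself narrows things down: inside the Ratings\Admin namespace, an unqualified call resolves to Ratings\Admin\wc_get_orders() first and falls back to the global wc_get_orders() only if WooCommerce has defined it by then; during plugin activation (and possibly on early cron requests) WooCommerce is not loaded yet. A minimal sketch of a guarded version of the method (my assumption about the fix, reusing the names from the question):

    public function getPaidOrders()
    {
        // WooCommerce is not loaded during plugin activation and may not
        // be loaded yet when the cron hook fires, so test for the global
        // function instead of crashing.
        if (!function_exists('wc_get_orders')) {
            return array();
        }

        // The leading backslash makes the global scope explicit.
        return \wc_get_orders(array(
            'limit'  => -1,
            'status' => array('wc-completed'),
        ));
    }

Hooking dailyAusgMails() onto an action that runs after WooCommerce initializes (as the class already does with woocommerce_after_register_post_type), rather than calling it directly from the constructor, would avoid the early call entirely.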