authentication – Suddenly failing to authenticate against SharePoint Online after the weekend of April 10–11

After the weekend (April 10–11) I suddenly started failing to authenticate against multiple SharePoint Online tenants from an app hosted in Azure that uses an AppId and secret.

Through fiddling with the code I found that raising the .NET Framework version (4.5 -> 4.6.1) and redeploying seemed to fix this for some tenants (and reverting to 4.5 brought the error back), but what I can't figure out is why this is happening (and some tenants seem to have no problems with the 4.5 version).

I have not been able to find any news about changes, other than a security update for SharePoint Online on April 14 (which does not align with when the problems started).

Has anyone else run into this problem, or does anyone have an idea what's happening?

The code is relying on “SharePointContext.cs” and “TokenHelper.cs” to authenticate.


postgresql insert into jsonb key failing with syntax error at or near “->>”

Running the following query, auto-generated by Eloquent's upsert function, throws a syntax error and I'm not sure why. I couldn't find anything stating that Postgres supports the following syntax, so I'm looking for expert advice on whether this can work.

insert into "plugin_positions" ("created_at", "positions"->>"test", "slug", "tag", "updated_at") values ('2021-04-10 17:30:40', 0, 'contact-for-telegram', 'rrss', '2021-04-10 17:30:40');

Here’s the query that works (which uses the simple column name, and a valid json value):

insert into "plugin_positions" ("created_at", "positions", "slug", "tag", "updated_at") values ('2021-04-10 17:30:40', '{"test":0}', 'contact-for-telegram', 'rrss', '2021-04-10 17:30:40');

Does PostgreSQL allow inserting into a table if we specify the column as "positions"->>"test"?
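For reference, an INSERT column list may only contain bare column names, so a path expression like "positions"->>"test" can't appear there (and double quotes there denote an identifier, not a JSON key). A minimal sketch of the two operations that do work, run via psql (the database name mydb is a placeholder; the table and values are taken from the question):

```shell
# Sketch, not a definitive fix: "mydb" is a placeholder database name.
# Target the whole jsonb column in INSERT; to change one key, use jsonb_set() in UPDATE.
psql -d mydb <<'SQL'
-- Insert a complete jsonb document into the "positions" column (the working query)
insert into "plugin_positions" ("created_at", "positions", "slug", "tag", "updated_at")
values ('2021-04-10 17:30:40', '{"test":0}', 'contact-for-telegram', 'rrss', '2021-04-10 17:30:40');

-- Change just the "test" key inside the existing jsonb value
update "plugin_positions"
set "positions" = jsonb_set("positions", '{test}', '1'::jsonb)
where "slug" = 'contact-for-telegram';
SQL
```

jsonb_set(target, path, new_value) takes a text-array path, so nested keys can be addressed as '{outer,inner}'.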

mac mini – Is my SSD failing?

I have a Mac mini 2018 with a 2 TB SSD.

I’ve started to notice a few files getting corrupted, and an I/O error being reported when I try to access them.

I’ve run Disk Utility, and nothing bad shows up either running in recovery or normal mode.

The worry I have is that the SSD is degrading, but I’m not sure how to confirm my suspicion.

How do I tell if my SSD is dying? Or is there some other cause?
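One way to check is the drive's SMART data. A sketch, assuming the internal drive shows up as disk0 (on a 2018 Mac mini the internal SSD sits behind the T2 controller, so third-party SMART tools may report limited detail):

```shell
# Coarse built-in check: "SMART Status: Verified" means no failure is predicted
diskutil info disk0 | grep -i smart

# More detail via smartmontools (assumes Homebrew is installed)
brew install smartmontools
smartctl -a /dev/disk0   # look at media/data-integrity error counters and percentage used
```

Rising media error counts alongside I/O errors on specific files would point at the SSD rather than the filesystem.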

Google Site third-party domain verification fails | The required DNS TXT record has propagated, yet verification is failing

A similar question has been asked here and in many other places; however, it doesn't match my issue exactly. I will first describe the situation with screenshots, then state my questions, and then provide some additional information.


  1. I have bought a domain, and I am using the steps and info from the following screenshot (Image 1).
  2. I have made both the CNAME and TXT record entries in my DNS. I ran dig MY_DOMAIN any; I do not see the CNAME in the output, but I can see the TXT record (Image 2).
  3. However, the verification is still failing for the TXT record method as well.


  1. Please see the following image (Image 3). It says that Google didn't find the TXT record. But, as I already mentioned, I can see it in the output of dig <DOMAIN> any. Why this discrepancy?
  2. I read this answer, which talks about a potential conflict due to a CNAME record. Should I delete the CNAME record from my DNS, since the TXT record is visible anyway? Will this help?

Please feel free to get back to me with questions.

Additional Info:

  1. I do not see the TXT record in the output of dig <DOMAIN>. It is only visible when I use the any option with the dig command. Can this be a source of error?
  2. While adding the records, I first added the TXT record and tried to verify. When that didn't work, I added the CNAME as well.
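For context, these are roughly the queries worth comparing (example.com is a placeholder for the real domain). Google's verifier looks up the TXT type directly rather than using ANY, and many servers answer ANY queries only from cache or refuse them entirely:

```shell
# example.com is a placeholder for the real domain.
dig TXT example.com +short            # via the local resolver
dig TXT example.com @8.8.8.8 +short   # via Google's public resolver
dig TXT example.com +trace            # follow delegation down to the authoritative servers
```

If the google-site-verification=... string shows up in the first two but verification still fails, the record is at least visible the way Google queries it.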

Image descriptions:

Image 1: The white-background one. It shows the instructions on how to verify a third-party domain.

Image 2: The black background one. It shows the results of dig.

Image 3: Error on Google

ssrs – Reporting & Logging Failing on SQL Server Job

System Info: SQL Server 2014 Enterprise, Windows Server 2012 R2, running on a VM

I have a Maintenance Plan set up to perform a log backup of a database every 17 minutes. The job normally runs fine without any issue, but for the last week, at the same time each day, the job has reported a failure on the last part of the Subplan:

Code: 0xC0024104     Source: Reporting Task for subplan-{9D28836F-FCA4-4795-984B-03ADE9020C03}      Description: The Execute method on the task returned error code 0x80131904 (Execution Timeout Expired.  The timeout period elapsed prior to completion of the operation or the server is not responding.). The Execute method must succeed, and indicate the result using an "out" parameter.

The Reporting & Logging is configured on this Maintenance Plan, and is to output a text file report of the backup to the same drive that the Data files are on. This is the same configuration for several other Maintenance Plans for other databases, and those have not yet reported an issue similar to the above.

What I see is that no text files for the Subplan are being created at the time this errors out, so it's very likely the error is generated because the job cannot create the text report for the Subplan, even if the backup itself ran successfully.

I am not sure how to address the issue. I have reviewed system logs and VM logs, and talked with the folks responsible for storage, but have not found anything that would indicate a problem talking to the local disk. I also don't see any points of contention (so far) that might conflict with the Maintenance Plan.

ssl – Gmail failing to accept TLS

I recently set up a postfix mail server. Testing it with other domains, everything seems to work well.

However, when my server tries to send messages to Gmail, they are marked as spam, with the red padlock and the note "did not encrypt this message".

(The domain shown is not my domain. However, the above is exactly what Gmail says.)

After forcing TLS, I find that my server is unable to send messages to Gmail at all. Logs state:

TLS is required, but was not offered by host

Wait, what? Gmail certainly offers TLS!

What’s happening here?

postconf -n

alias_database = $alias_maps
alias_maps = hash:/etc/aliases
append_dot_mydomain = no
biff = no
bounce_template_file = /etc/postfix/
broken_sasl_auth_clients = yes
canonical_maps = hash:/etc/postfix/maps/canonical
command_directory = /usr/bin
compatibility_level = 2
daemon_directory = /usr/lib/postfix/bin
data_directory = /var/lib/postfix
default_destination_concurrency_limit = 5
disable_vrfy_command = yes
dovecot_destination_recipient_limit = 1
home_mailbox = Maildir/
inet_interfaces = all
inet_protocols = ipv4
local_destination_concurrency_limit = 2
mail_owner = postfix
mailbox_command = /usr/lib/dovecot/deliver -m "${EXTENSION}"
mailbox_size_limit = 0
message_size_limit = 104857600
mydestination = $myhostname
mydomain =
myhostname =
mynetworks =,
myorigin = $myhostname
queue_directory = /var/spool/postfix
readme_directory = no
recipient_delimiter = +
relay_destination_concurrency_limit = 1
smtp_tls_CAfile = /etc/ssl/cert.pem
smtp_tls_note_starttls_offer = yes
smtp_tls_security_level = may
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtp_tls_verify_cert_match = hostname, nexthop, dot-nexthop
smtp_use_tls = yes
smtpd_banner = $myhostname ESMTP $mail_name
smtpd_helo_required = yes
smtpd_helo_restrictions = permit_mynetworks, reject_non_fqdn_helo_hostname, reject_invalid_helo_hostname, reject_unknown_helo_hostname, permit
smtpd_recipient_restrictions = permit_mynetworks, reject_unknown_client_hostname, reject_unknown_sender_domain, reject_unknown_recipient_domain, reject_unauth_pipelining, permit_sasl_authenticated, reject_unauth_destination, reject_invalid_hostname, reject_non_fqdn_sender
smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, defer_unauth_destination
smtpd_sasl_auth_enable = yes
smtpd_sasl_authenticated_header = yes
smtpd_sasl_local_domain = $myhostname
smtpd_sasl_path = private/dovecot-auth
smtpd_sasl_security_options = noanonymous
smtpd_sasl_type = dovecot
smtpd_sender_login_maps = $virtual_mailbox_maps
smtpd_sender_restrictions = permit_mynetworks, reject_unknown_sender_domain, reject_sender_login_mismatch,
smtpd_tls_CAfile = /etc/ssl/cert.pem
smtpd_tls_ask_ccert = yes
smtpd_tls_cert_file = /etc/letsencrypt/live/
smtpd_tls_ciphers = high
smtpd_tls_key_file = /etc/letsencrypt/live/
smtpd_tls_loglevel = 1
smtpd_tls_protocols = !TLSv1 !SSLv2 !SSLv3
smtpd_tls_security_level = may
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtpd_tls_session_cache_timeout = 3600s
smtpd_use_tls = yes
unknown_address_reject_code = 550
unknown_client_reject_code = 550
unknown_hostname_reject_code = 550
unknown_local_recipient_reject_code = 550
virtual_alias_maps = hash:/etc/postfix/maps/valiases
virtual_mailbox_domains = hash:/etc/postfix/maps/vmailbox-domains
virtual_mailbox_maps = hash:/etc/postfix/maps/vmailbox-users
virtual_transport = dovecot
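A sketch of how outbound STARTTLS to Gmail can be probed the same way the Postfix SMTP client would (posttls-finger ships with Postfix; the MX name below is Gmail's public inbound MX, and openssl offers a manual alternative):

```shell
# Probe the TLS handshake Postfix would perform when delivering to gmail.com
posttls-finger gmail.com

# Or manually: connect to Gmail's inbound MX and check that STARTTLS is advertised
openssl s_client -starttls smtp -connect gmail-smtp-in.l.google.com:25 -crlf
```

If STARTTLS is offered here but the mail log still says it was not, a middlebox or provider rewriting the EHLO response on port 25 becomes a plausible suspect.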

router – UPnP keeps failing in setting up a dedicated server for Arma3 on PC

I followed a video on how to do this, using TADST for Arma 3. Each time I check the box for UPnP, it gets unchecked after each use of the launcher (TADST). Also, I know my IPv4 address, which I looked up via CMD, and that hasn't changed. But we had to get a bigger switch for the internet; we currently have 5 devices wired to the router (lots of PC and Xbox gaming, along with game hosting via the in-game hosting ability).

When I start the server, it eventually tells me that UPnP has failed. Could it be that two PCs are trying to forward the same ports? I am trying to use 2302-2306 for this dedicated server, and my husband has his Arma 3 in-game hosting set to 2302. Could this be the problem?
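A sketch of one quick check from the Windows command prompt on the server PC: list anything already bound to the Arma 3 port range locally. (A UPnP conflict between two PCs requesting the same external port would show up at the router's UPnP port-mapping table instead, not here.)

```shell
# Run in a Windows command prompt: findstr ORs the space-separated patterns,
# so this lists any process already using one of the Arma 3 ports
netstat -ano | findstr ":2302 :2303 :2304 :2305 :2306"
```

The PID in the last column can then be matched against Task Manager to see which program holds the port.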

Thank you.

How much damage does a creature with the Improved Evasion feature take if critically failing a Reflex Save?

A creature with Improved Evasion takes full damage after rolling a critical failure on a Reflex save against a damaging effect.

The important part here is "When you roll a failure/critical failure": only the result you rolled matters, not what it was turned into by class abilities, spells, or anything else.

No ability will ever change your degree of success by more than one step; this was added in the errata:

Changes to the Greater Juggernaut, Greater Resolve, Improved Evasion, and Third Path to Perfection class features

All three of these abilities grant a two-tier benefit on a failed saving throw of the specified type, but (as always) no ability will ever change your degree of success by more than one step. To clarify, we’re making the following clarification to all three abilities. Change the beginning of the last sentence from “When you fail” a given saving throw to “When you roll a failure on” a given saving throw.

The damage taken, depending on your level of evasion, for a basic Reflex saving throw would be:

Save Result      | No Evasion    | Evasion       | Improved Evasion
Critical Success | No damage     | No damage     | No damage
Success          | Half damage   | No damage     | No damage
Failure          | Full damage   | Full damage   | Half damage
Critical Failure | Double damage | Double damage | Full damage

saving throw – How much damage does a creature take if critically failing a Reflex Save with Improved Evasion?

Improved Evasion (available to at least Rogues and Swashbucklers) provides:

You elude danger to a degree that few can match. Your proficiency rank for Reflex saves increases to legendary. When you roll a critical failure on a Reflex save, you get a failure instead. When you roll a failure on a Reflex save against a damaging effect, you take half damage.

How much damage does such a creature take on a critical failure? Full or half?

testing – How to avoid Ember tests failing if endpoint returns 404?

The Context:

I have an Ember app, which I just updated from 2.16 to 2.18.2 (the latest of the 2.x versions).
I have a stubby server for testing purposes.

The Problem

When running my tests, some endpoints return a 404 (which is expected), but my test fails with the following log:

not ok 32 Chrome 79.0 - (undefined ms) - Global error: Uncaught Error: Ember Data Request GET returned a 404
Payload (Empty Content-Type)

What I have tried

I added mock endpoints to return some data. The problem with this is that it adds too much unnecessary data.

Is there any way to flag those 404s as expected, since they are not needed for a particular test to pass?