Can I Avoid Using Complete/Pure Black Background Color For Dark Mode

I am creating a webpage with a dark mode toggle. I am not sure whether I should avoid using a completely black background color for dark mode. If I should avoid it, what color should I actually implement for my dark theme instead?

Can I use medium or gentle from these…

Can I use either medium or gentle?

postgresql – Can a transaction avoid inserting duplicate values in my database?

I have the following table named values:

id serial
user_id integer
store_id integer
identifier VARCHAR(255)
created timestamp without time zone DEFAULT NOW()

But my application has inserted the following values:

id  user_id  store_id  identifier  created
1   1        1         123cdwe     2021-11-11 13:00:00
1   1        1         Ggrseza     2021-11-11 13:00:00

Those values seem wrong. What I suspect is replication lag between the master and the replica, combined with the application being terminated right after writing. The application is a web application, and I suspect this happens when the user refreshes the page while saving.

Would a transaction solve this problem, or should it be tackled at the application level?
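For what it's worth, a transaction by itself will not stop two concurrent sessions from both inserting the same logical row; a unique constraint makes the database itself reject duplicates. Below is a minimal sketch using psycopg2, under the assumption (and it is only an assumption) that a duplicate means matching user_id, store_id and created; adjust the column list to whatever actually defines a duplicate in your application:

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # hypothetical connection string

with conn, conn.cursor() as cur:
    # Enforce uniqueness at the database level; a transaction alone
    # cannot prevent two sessions from inserting the same row.
    cur.execute("""
        ALTER TABLE "values"
        ADD CONSTRAINT values_user_store_created_key
        UNIQUE (user_id, store_id, created)
    """)
    # Later inserts can then silently skip rows that would collide:
    cur.execute("""
        INSERT INTO "values" (user_id, store_id, identifier)
        VALUES (%s, %s, %s)
        ON CONFLICT ON CONSTRAINT values_user_store_created_key DO NOTHING
    """, (1, 1, "123cdwe"))

Note also that the two rows shown above share id = 1, which a primary key on the serial column would have prevented.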

charts – How to avoid performance issues in user-customizable dashboards by limiting the amount of information displayed

I work on a product that allows users to create custom dashboards. They can create 1-25 custom charts, and the available chart types are indicators, columns, bars, area, pie, and line.

The problem we are currently facing is that some users create dashboards with dimensions that not only pull in an enormous amount of data, causing performance issues, but are also really hard to read and analyze.

I would like some advice on how to:

  • Reduce the data that we render in the charts (a rough downsampling sketch follows this list).
  • Limit users so they can’t create nonsensical reports.
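For the first point, here is a minimal sketch of one possible approach, assuming a Python backend (the 500-point cap is an arbitrary placeholder): aggregate long series on the server so that a single chart never receives more than a fixed number of points.

from statistics import mean

def downsample(points, max_points=500):
    """Collapse a long numeric series into at most max_points bucket averages."""
    if len(points) <= max_points:
        return points
    bucket_size = -(-len(points) // max_points)  # ceiling division
    return [mean(points[i:i + bucket_size])
            for i in range(0, len(points), bucket_size)]

# Example: 100,000 raw points become 500 averaged points before rendering.
series = downsample(list(range(100_000)))

A cap like this also bounds render time on the client, independently of how large a dimension a user picks.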

Thanks in advance,

debian – Avoid downgrading Postgres when installing the g++ package

I have a Linux server with Postgres installed on it. I need to install some new packages, like newer Python versions and modules, but installing any package that needs a C compiler fails. I checked gcc and g++ and realized that g++ is not installed (or was uninstalled):

dashboard@dashboard01:~$ gcc -v
Using built-in specs.
COLLECT_GCC=gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc/x86_64-linux-gnu/6/lto-wrapper
Target: x86_64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Debian 6.3.0-18+deb9u1' --with-bugurl=file:///usr/share/doc/gcc-6/README.Bugs --enable-languages=c,ada,c++,java,go,d,fortran,objc,obj-c++ --prefix=/usr --program-suffix=-6 --program-prefix=x86_64-linux-gnu- --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-vtable-verify --enable-libmpx --enable-plugin --enable-default-pie --with-system-zlib --disable-browser-plugin --enable-java-awt=gtk --enable-gtk-cairo --with-java-home=/usr/lib/jvm/java-1.5.0-gcj-6-amd64/jre --enable-java-home --with-jvm-root-dir=/usr/lib/jvm/java-1.5.0-gcj-6-amd64 --with-jvm-jar-dir=/usr/lib/jvm-exports/java-1.5.0-gcj-6-amd64 --with-arch-directory=amd64 --with-ecj-jar=/usr/share/java/eclipse-ecj.jar --with-target-system-zlib --enable-objc-gc=auto --enable-multiarch --with-arch-32=i686 --with-abi=m64 --with-multilib-list=m32,m64,mx32 --enable-multilib --with-tune=generic --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=x86_64-linux-gnu
Thread model: posix
gcc version 6.3.0 20170516 (Debian 6.3.0-18+deb9u1)
dashboard@dashboard01:~$ g++ -v
-bash: g++: command not found

I tried to install g++, but it failed:

dashboard@dashboard01:~$ sudo apt-get install g++
[sudo] password for dashboard:
Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 g++ : Depends: g++-6 (>= 6.3.0-9~) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

There is an endless chain of dependencies!

So I tried the aptitude command, and here is the output:

dashboard@dashboard01:~$ sudo aptitude install g++
The following NEW packages will be installed:
  g++ g++-6{a} libc-dev-bin{ab} libc6-dev{ab} libstdc++-6-dev{a} linux-libc-dev{a}
0 packages upgraded, 6 newly installed, 0 to remove and 2 not upgraded.
Need to get 12.7 MB of archives. After unpacking 61.1 MB will be used.
The following packages have unmet dependencies:
 libc6-dev : Depends: libc6 (= 2.24-11+deb9u4) but 2.28-8 is installed
 libc-dev-bin : Depends: libc6 (< 2.25) but 2.28-8 is installed
The following actions will resolve these dependencies:

     Keep the following packages at their current version:
1)     g++ (Not Installed)
2)     g++-6 (Not Installed)
3)     libc-dev-bin (Not Installed)
4)     libc6-dev (Not Installed)
5)     libstdc++-6-dev (Not Installed)



Accept this solution? (Y/n/q/?) n
The following actions will resolve these dependencies:

      Remove the following packages:
1)      libpython3.7-minimal (3.7.2-3 (now))
2)      libpython3.7-stdlib (3.7.2-3 (now))
3)      postgresql-11 (11.2-2 (now))
4)      postgresql-client-11 (11.2-2 (now))

      Install the following packages:
5)      postgresql-9.6 (9.6.23-0+deb9u1 (oldoldstable))
6)      postgresql-client-9.6 (9.6.23-0+deb9u1 (oldoldstable))
7)      postgresql-contrib-9.6 (9.6.23-0+deb9u1 (oldoldstable))
8)      sysstat (11.4.3-2 (oldoldstable))

      Downgrade the following packages:
9)      libc-bin (2.28-8 (now) -> 2.24-11+deb9u1 (oldoldstable))
10)     libc6 (2.28-8 (now) -> 2.24-11+deb9u4 (oldoldstable))
11)     libgssapi-krb5-2 (1.17-2 (now) -> 1.15-1+deb9u2 (oldoldstable))
12)     libk5crypto3 (1.17-2 (now) -> 1.15-1+deb9u2 (oldoldstable))
13)     libkrb5-3 (1.17-2 (now) -> 1.15-1+deb9u2 (oldoldstable))
14)     libkrb5support0 (1.17-2 (now) -> 1.15-1+deb9u2 (oldoldstable))
15)     libpq5 (11.2-2 (now) -> 9.6.23-0+deb9u1 (oldoldstable))
16)     libssl1.1 (1.1.1b-1 (now) -> 1.1.0l-1~deb9u3 (oldoldstable))
17)     locales (2.28-8 (now) -> 2.24-11+deb9u1 (oldoldstable))
18)     postgresql (11+200 (now) -> 9.6+181+deb9u3 (oldoldstable))

      Leave the following dependencies unresolved:
19)     libpython3.7-minimal recommends libpython3.7-stdlib



Accept this solution? (Y/n/q/?)

My concern is about Postgres; downgrading it is not an option.

I checked Postgres, and it seems it was compiled with the current gcc:

dashboard=# select version();
                                                             version
----------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 11.7 (Debian 11.7-1.pgdg90+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 6.3.0-18+deb9u1) 6.3.0 20170516, 64-bit
(1 row)

Where has g++ gone, and how can I fix this dependency problem?

dnd 5e – Does the Simic Hybrid’s Manta Glide let them glide after a high jump to double their movement and avoid opportunity attacks?

Rules as written: Yeah, this seems to be the case.

The rules for High Jumps state:

When you make a high jump, you leap into the air a number of feet equal to 3 + your Strength modifier (minimum of 0 feet) if you move at least 10 feet on foot immediately before the jump. When you make a standing high jump, you can jump only half that distance.

When calculating your standing jump height, you have to factor in the “Round Down” rule from the introduction to the Player’s Handbook:

Whenever you divide a number in the game, round down if you end up with a fraction, even if the fraction is one-half or greater.

So with a Strength of 14 you have a modifier of +2, for a full high jump of 5 feet and a standing high jump of 2 feet (half of 5, rounded down). A 2-foot fall from the apex of your jump then translates to 4 feet of lateral movement. This works just fine if you are measuring distance to the foot, but it is unclear how this functions if you are playing on a fixed 5-foot grid. In that case, you will have to ask your DM.
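Spelling out that arithmetic (Manta Glide lets you move up to 2 feet horizontally for every 1 foot you descend):

$$\text{standing high jump} = \left\lfloor \frac{3 + 2}{2} \right\rfloor = 2\ \text{ft}, \qquad \text{glide} = 2\ \text{ft descended} \times 2 = 4\ \text{ft of lateral movement}$$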

Rules as intended: This makes for a significantly overpowered feature, and the narrative image is ridiculous.

This, to me, seems to be a bug in the feature. If we take the feature to mean “double all your movement for free”, then it is absolutely broken. It goes from being a good feature in a specific context, to being a blanket improvement in any context where movement is measured.

Further, it is even more powerful than if a feature just said “Your speed is doubled”. A racial feature that said “your speed is doubled” would already be extremely powerful, but this feature is essentially that, with the fall protection.

Further, as written, it does prevent opportunity attacks: the part of the move where you leave the creature’s reach isn’t actually using your movement, which is sort of a cherry on top of an already broken feature. This simply cannot be the proper application of the feature, because that would make it far and away the most powerful feature in the game. Compare it to the Mobile feat:

  • Your speed increases by 10 feet.
  • When you use the Dash action, difficult terrain doesn’t cost you extra movement on that turn.
  • When you make a melee attack against a creature, you don’t provoke opportunity attacks from that creature for the rest of the turn, whether you hit or not.

Mobile is generally regarded as a very good feat. Allowing Manta Glide to work as you have described makes it a vastly superior version of the Mobile feat.

Mobile increases your speed by 10 feet, Manta Glide doubles it. Mobile lets you cancel difficult terrain if you take the Dash action, Manta Glide lets you do it for free. Mobile lets you prevent opportunity attacks from a creature you attack, Manta Glide lets you prevent opportunity attacks from any creature for free.

There is no way this is the correct interpretation. Manta Glide should simply not be allowed to function this way.

Also, the mental image of a character jumping a bit then gliding a few feet as their primary locomotion pattern seems quite silly to me, but that’s mostly an opinion of mine. Maybe you think that’s cool. That’s okay. The primary issue here is that this reading of the feature is so overwhelmingly powerful that it cannot be correct.

unity – How can I keep the player from moving when using the Mixamo idle animation instead of the HumanoidIdle animation?

In one project, using my player's original package, the idle animation is HumanoidIdle.
With this animation the player does not move at all; his position does not change while the idle animation is playing.

But in my new project I'm using a Mixamo idle animation, and this animation makes the player change position a bit all the time. It is a very small change, but you can see the player moving.

Is there a way to fix this, or to keep the player from moving with the Mixamo animation?

This is a screenshot of the player's Animator Controller HumanoidIdle animation settings, shown in the Inspector on the Animation tab (right side of the screenshot):

Humanoididle animation settings of the Animation tab in the inspector

And this screenshot shows the settings of the same HumanoidIdle animation, but on the Rig tab:

Humanoididle animation settings in the inspector of the Rig tab

And this screenshot shows the Animation tab settings of the Mixamo idle animation:

Mixamo idle animation settings in the inspector of the Animation tab

And lastly, the Mixamo animation settings on the Rig tab:

Mixamo idle animation settings in the inspector of the Rig tab

I can see that when I'm using the Mixamo idle animation, the player's position values change, mostly on X and Z. When using the HumanoidIdle animation, the X and Z values never change: the animation plays, but the player's position never changes.

nikon – how to find the best FX camera to avoid diffraction at small apertures

This is only possible with a system that corrects for the effects of diffraction in post-processing. For example, Canon's Digital Lens Optimizer can somewhat negate the effects of diffraction. I'm not sure whether similar features are available for Nikon.

The problem here is that by fixing the sensor size to full frame (36mm x 24mm) and by fixing the resolution to some high value, and by wanting to use a small aperture, you are creating conditions that invariably cause diffraction. No lens is going to eliminate it. No camera body is going to eliminate it.

The only ways you can reduce the effect are:

  • Reduce the amount of diffraction using sophisticated algorithms in post processing
  • Use a lower resolution
  • Use a bigger aperture; however, that creates a shallower depth of field
  • Use a bigger-format sensor; however, that not only costs a lot but also creates a shallower depth of field, unless you use a higher aperture f-number, in which case you are back where you started with respect to diffraction and depth of field, but definitely not with respect to total system cost

Focus stacking might also help if your subject doesn’t move and you can take multiple pictures using a tripod from the same position.

By the way, for landscape photography this is not a problem. For example, at 50 megapixels on full frame, f/8 is the limit where diffraction becomes visible. However, this means diffraction when pixel peeping, not diffraction when viewing a reasonably sized print from a reasonable distance. If you print a 1-meter-sized picture that is viewed from a distance of 1 meter, a circle of confusion calculator gives a 0.008 mm circle of confusion with perfect vision. With this circle of confusion, if you focus 40 meters away with a 50mm lens, anything from 20 meters to infinity is in focus at f/8. And you have room to use, for example, f/16 if you want some nearby tree to be in perfect focus; the f/16 diffraction would probably not be visible in a 1-meter-sized picture viewed from 1 meter with perfect vision, only when pixel peeping. f/16 would allow anything from 10 meters to infinity to be in focus.
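As a rough check on those numbers, the standard hyperfocal-distance formula with the stated circle of confusion $c = 0.008$ mm gives:

$$H = \frac{f^2}{N c} + f \approx \frac{(50\ \text{mm})^2}{8 \times 0.008\ \text{mm}} \approx 39\ \text{m at } f/8, \qquad H \approx \frac{(50\ \text{mm})^2}{16 \times 0.008\ \text{mm}} \approx 19.5\ \text{m at } f/16$$

Focusing at or beyond H keeps everything from roughly H/2 to infinity within that circle of confusion, matching the 20-meter and 10-meter near limits above. On the diffraction side, assuming light of wavelength 550 nm, the Airy disk diameter at f/8 is about $2.44 \lambda N \approx 0.011$ mm, roughly 2.6 of the approximately 4.1 µm pixels on a 50-megapixel full-frame sensor, which is why the softening shows up mainly when pixel peeping.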

powerbi – how to avoid Power BI incremental refresh's duplicated query in BigQuery?

As Chris Webb described in his article, Power BI makes two queries in order to import data from a SQL database: one that returns a limited number of rows just to discover the table schema, and another that gets the actual data.

On many SQL servers this only compromises performance, but in BigQuery, where you are charged for the amount of data processed regardless of the number of resulting rows, this first query is costing us a lot, because query folding is not taking place.
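As an aside, the processed-bytes figures shown in the experiments below can be reproduced at no cost with a BigQuery dry run; here is a minimal sketch in Python (the project, dataset and table names are hypothetical):

from google.cloud import bigquery

client = bigquery.Client()
config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

# Even with LIMIT 1, BigQuery bills for every column it scans, which is
# why a schema-discovery query can process far more data than expected.
job = client.query(
    "SELECT * FROM `my_project.my_dataset.my_table` LIMIT 1",
    job_config=config,
)
print(f"This query would process {job.total_bytes_processed} bytes")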

Here are some experiments with smaller datasets to show what is happening:

Notice that the “discovery query” above is processing 121.66 MB to return just one row, while the “data query” below, where query folding is taking place, is processing just 7.35 MB.

[screenshot: the two queries in the BigQuery console, showing the amount of data each one processed]

Is there any way to avoid the “discovery query”?

amazon web services – Locking in AWS Lambda functions to avoid concurrent runs of a critical section of the code

Problem:
We have an AWS Lambda function which calls a 3rd-party API. That 3rd-party system has a concurrency issue, so under some circumstances, if we call that API from several instances of the Lambda function running at the same time, we end up with duplicate objects created in the 3rd-party system. The vendor of that system is not promising to fix the issue quickly, so for the moment we need to avoid concurrency in the section of the code which calls that API.

Our Lambda function is triggered when a file is dropped into an S3 bucket, so when multiple files are dropped at the same time, each of those files is processed by an instance of this function running in parallel.

For performance reasons we would like to continue processing files concurrently, so we are not considering a solution that uses a queue (like SQS) to queue the files and then call the Lambda function for each of them sequentially.

What is the easiest way to write a kind of

threading.Lock()

equivalent that blocks the critical section of the code that calls the 3rd-party API from executing in parallel instances?

We think we could probably implement this by updating a value in DynamoDB and checking whether it is locked or not, but at the moment we don't use DynamoDB; our solution consists of S3 buckets and serverless components like AWS Step Functions, Lambda functions, and Glue jobs, so introducing a database for the single purpose of controlling locks sounds like overkill. (A rough sketch of that DynamoDB idea follows.)
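For illustration only, here is a minimal sketch of such a DynamoDB-based lock, assuming a hypothetical table named api-locks with a string partition key lock_id. A conditional write in DynamoDB is atomic, so only one concurrent Lambda instance can create the lock item:

import time

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")
LOCK_TABLE = "api-locks"  # hypothetical table; partition key "lock_id" (string)

def acquire_lock(lock_id: str, ttl_seconds: int = 60) -> bool:
    """Try to take the lock; returns False if another instance holds it."""
    now = int(time.time())
    try:
        dynamodb.put_item(
            TableName=LOCK_TABLE,
            Item={
                "lock_id": {"S": lock_id},
                "expires_at": {"N": str(now + ttl_seconds)},
            },
            # Atomic conditional write: succeeds only if no unexpired
            # lock item already exists.
            ConditionExpression="attribute_not_exists(lock_id) OR expires_at < :now",
            ExpressionAttributeValues={":now": {"N": str(now)}},
        )
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] == "ConditionalCheckFailedException":
            return False
        raise

def release_lock(lock_id: str) -> None:
    dynamodb.delete_item(TableName=LOCK_TABLE, Key={"lock_id": {"S": lock_id}})

The expires_at attribute lets a lock held by a crashed instance expire instead of deadlocking; callers that fail to acquire the lock would retry with a short backoff.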

Any thoughts are welcome

penalty – What is duplicate content and how can I avoid being penalized for it on my site?

Google’s Duplicate Content webmaster guide defines duplicate content (for purposes of search engine optimization) as “substantive blocks of content within or across domains that either completely match other content or are appreciably similar”.

Google’s guide goes on to list the following as examples of duplicate content:

  • Discussion forums that can generate both regular and stripped-down pages targeted at mobile devices
  • Store items shown or linked via multiple distinct URLs
  • Printer-only versions of web pages

Penalties

Search engines need to penalize some instances of duplicate content that are designed to spam their search index such as:

  • scraper sites which copy content wholesale
  • simplistic article spinning techniques which generate “new” content by selectively replacing words in existing content.

When search engines find duplicate content they may:

  • Penalize an entire site that contains duplicate content. (when spammy)
  • Pick a page as the canonical source of the content and lower the priority or not index the other page with the duplication. (common)
  • Take no punitive action and index multiple copies of the content (rare)

Avoiding internal duplication

When asked about duplicate content, Google’s Matt Cutts said that it should only hurt you if it looks spammy. However, many webmasters employ the following techniques to avoid unnecessary content duplication:

  • Ensure that content is only accessible under one canonical URL
  • If your site must return the same content under multiple URLs (e.g. for a “print view” page), specify a canonical URL manually with a link element in the document header, such as <link rel="canonical" href="https://example.com/page">
  • In cases where your site returns similar content based upon parameters encoded in the URL (e.g. sorting a product catalog) exclude the URL parameters in Google Webmaster Tools

Content Syndication

Publishing content on your site that has been published elsewhere is called content syndication. Creating duplicate content through content syndication can be OK:

  • As long as you have permission to do so
  • You tell your users what the content is and where it came from
  • You link to the original source (a direct deep link to the original content from the page with the copy, not just a link to the home page of the site where the original can be found)
  • Your users find it useful
  • You have something to add to that content such that users would rather find that content on your site than elsewhere. (Commentary or critique for example.)
  • You have enough original content on your site as well (at least 50% original, but ideally 80% original)

While Google doesn’t penalize for every instance of duplicated content, even non-penalized duplicate content may not help you get visitors:

  • You are competing with all the other copies that are out there
  • Google will likely prefer the original source of the content and the most reputable copy of the content.

Google will penalize duplicate content published on your website from other sources if:

  • It appears to be scraped or stolen (especially without attribution).
  • Users don’t react well to it (especially clicking back to Google after visiting your site.)
  • There are so many copies of it out there that there is no reason to send users to your copy of it.
  • Your copy isn’t the original, most reputable, or most usable; and doesn’t have any commentary or critique.
  • Your site doesn’t have enough original content to balance all the republished content.
  • You duplicate pages so often within your own site that Googlebot has trouble crawling the full site.

Internationalization and Geo Targeting

Content localization is one area in which duplicating content can be beneficial for SEO. It is perfectly fine to publish the same content on sites targeted at different countries that speak the same language. For example you may have a US site, a UK site, and an Australian site, all with the same content.

With a site for each country, it is usually possible to rank better for users in that country. In addition, it is possible to specifically cater to users in each country with minor spelling differences, pricing in the currency of the country, or product shipping options. For more information on setting up geo-targeted websites see How should I structure my URLs for both SEO and localization?

Dealing with Content Scrapers

Other sites that steal your content and republish it without permission can occasionally cause duplicate content problems for your site. Search engines work hard to ensure that scraper sites do not benefit from duplicating your content. If a scraper site is causing problems for you, it may be possible to get the site removed from the Google index by filing a DMCA request with Google.