cloud computing – Web-service-based data access and reporting in the analytics world

Which reporting tools offer web-service-based data access and report generation in the analytics space? The reason I ask: direct or SQL-based data access from an on-premises or private cloud network to a DBCS instance on another cloud is not recommended for security reasons, so web-service-based data access and data load/extract is the popular approach. APIs such as REST APIs have well-defined input and output formats and follow a secure protocol and security model, which makes them a widely used way of accessing data across clouds. I know a few reporting tools that offer web-service-based data access, such as the BI Answers app in OBIEE; however, for other apps like BI Publisher and Visual Analyzer it is not available to the extent desired (there is still a lot of work to be done for these apps).

I thought I would connect with SMEs through this platform to find out about any reporting tools on the market that meet web-service-based data access and reporting needs.
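
For context, this is roughly what web-service-based extraction looks like from the client side; the endpoint URL, authentication scheme and parameters below are placeholders for illustration only, not any particular product's API:

import requests

BASE_URL = "https://reporting.example.com/api/v1"  # hypothetical reporting endpoint

# Pull a report's data over HTTPS; credential/token handling depends on the product.
response = requests.get(
    f"{BASE_URL}/reports/sales-summary/data",
    headers={"Authorization": "Bearer <access-token>"},
    params={"format": "json", "period": "2020-09"},
    timeout=30,
)
response.raise_for_status()
rows = response.json()  # structured rows, ready to load or analyse further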

Thanks,

Dashboard based on Today's Date – From a table

I have a list of students with responsibilities and I would like to create a dashboard in Google Sheets with information about the current month.

As an example:

(screenshot of the example source table)

then in September 2020, I want the dashboard to show:

(screenshot of the desired dashboard for September 2020)

… and also update automatically to show Jose and Gina once we are in October 2020.

Here is the demo sheet: https://docs.google.com/spreadsheets/d/113ZAUJgbzgtjEGai9LkM22h-AEyaPymxUHMnu-RLYEw/edit?usp=sharing
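
For illustration, a formula along these lines could drive the dashboard, assuming the source tab has student names in column A and a responsibility date in column B; the actual layout of the demo sheet may differ, in which case the ranges and the date test would need adjusting:

=FILTER(A2:A, MONTH(B2:B) = MONTH(TODAY()), YEAR(B2:B) = YEAR(TODAY()))

Because TODAY() recalculates, the filtered list would roll over automatically (e.g. to Jose and Gina) when October 2020 starts.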

Thanks for any help!

The Free Forex Robot Is Based On The Heiken Ashi Indicator

The Free Forex Robot is based on the Heiken Ashi indicator. The indicator draws bullish and bearish candles on the chart, which indicate an uptrend or a downtrend. After the first bullish candle, the EA closes all open sell trades and opens a new buy trade; after the first bearish candle, it closes all open buy trades and opens a new sell trade. You can also require two, three or more consecutive candles before a buy/sell signal is generated; the number of candles needed for a new signal is set with the TradeStartBar parameter in the EA. Download this and other free Forex robots at https://www.bestfxtools.com
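
For reference, a minimal sketch of the standard Heiken Ashi calculation and the candle-flip logic described above, written in Python rather than MQL; candles is assumed to be a list of (open, high, low, close) tuples, oldest first:

def heiken_ashi(candles):
    # Standard Heiken Ashi: ha_close is the bar's OHLC average,
    # ha_open is the midpoint of the previous Heiken Ashi candle.
    ha = []
    for i, (o, h, low, c) in enumerate(candles):
        ha_close = (o + h + low + c) / 4.0
        if i == 0:
            ha_open = (o + c) / 2.0
        else:
            prev_open, prev_close = ha[-1]
            ha_open = (prev_open + prev_close) / 2.0
        ha.append((ha_open, ha_close))
    return ha

def signal(ha, trade_start_bar=1):
    # Emit a signal only when the last trade_start_bar candles agree.
    last = ha[-trade_start_bar:]
    if all(close > open_ for open_, close in last):
        return "buy"   # bullish candles: close sells, open a buy
    if all(close < open_ for open_, close in last):
        return "sell"  # bearish candles: close buys, open a sell
    return None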

8 – Dynamic node field based on terms in a vocabulary

I’m hoping there is a module for this, but happy to get dirty in code if need be:

I have a vocabulary “Currency” which has “USD”, “HKD”, “GBP” as terms.

On my product node I want to have fields for “USD Price”, “HKD Price”, “GBP Price” dynamically generated from the terms in the currency vocabulary. What is the best way to achieve this?

The only ideas I can think of are to add the fields manually, or to create a paragraph type with a term-reference field and a price field. I don't want to do the first, and the second is considerably more work for the editor.

Any ideas welcome!

JavaScript syntax: bucketing based on days (today, yesterday and past)

I am trying to create three buckets of objects: today, yesterday and past. I am wondering whether the following can be done in a more concise way in pure JavaScript, without any libraries.

const [todayBucket, yesterdayBucket] = [0, 1].map(offset =>
  recordings.filter(({ createdOn }) =>
    moment(createdOn).isSame(moment().subtract(offset, 'days'), 'day'),
  ),
);
const pastBucket = recordings.filter(
  recording => ![...todayBucket, ...yesterdayBucket].includes(recording),
);

finder – How do I transfer all the contents of a folder based on a criterion to another folder?

I solved it by writing a Python script. I was just looking to see if Finder had any built-in functionality to do it.

EDIT: Added in the code!

import shutil
import os
import sys
import concurrent.futures

path_HDD = "/Volumes/HDD/frames/"
path_10 = "/Volumes/HDD/frames_final/10"
path_0 = "/Volumes/HDD/frames_final/0"
path_5 = "/Volumes/HDD/frames_final/5"

def copy_tree(src_root, dest):
    # Copy every file found under src_root into the destination folder.
    for r, d, f in os.walk(src_root, topdown=False):
        for fi in f:
            file_path = os.path.join(r, fi)
            try:
                shutil.copy(file_path, dest)
            except OSError as error:
                print(error)
                break

def move(path):
    # os.walk yields (dirpath, dirnames, filenames); the length of the
    # directory path and its trailing digits decide the destination folder.
    for root, dirs, files in os.walk(path, topdown=False):
        if len(root) == 36 and root[-1] == "0":
            copy_tree(root, path_0)
        elif len(root) == 36 and root[-1] == "5":
            copy_tree(root, path_5)
        elif len(root) == 37 and root[-2:] == "10":
            copy_tree(root, path_10)
    print("DONE")
    sys.exit(0)


if __name__ == "__main__":
    try:
        # Note: the executor is created but move() still runs in the main
        # process, as in the original script.
        with concurrent.futures.ProcessPoolExecutor():
            move(path_HDD)
    except OSError as error:
        print(error)


design – Trying to figure out the optimal selection based on a set of rules

Background: We have software that displays different products to the user

Problem: Given a set of rules, determine which primary product image we should show the user. We are writing this in Python, but I don't think that matters.

Rules:

The specifics don't really matter, so I'm just going to abbreviate them. I can calculate a boolean return value for each of these. What matters is that some must match completely, while others must match in an order of precedence.

Rule     Notes
Rule_1   The image HAS to have this.
Rule_2   The image HAS to have this.
Rule_3   The image HAS to have this.
Rule_4   The image must match one of the following (see the next two for the actual rule):
  Rule_4a   If the image is this, use it.
  Rule_4b   Otherwise, use this.
Rule_5   The image must match one of the following, in descending order of preference:
  Rule_5a   If this passes, this image could be used.
  Rule_5b   If this is a different type, let's say a "front" image, use this.
  Rule_5c   If this is another type, say the "back" of the product, use this.
Rule_6   The image must be at least this size.

What I’m thinking:

I'm thinking of iterating through the list of images, attaching a score to each one, and then returning the one with the highest score.

Problem with that:

I'm not sure how to handle the priority order (let me know if that makes sense). The best way I can state it is that some of these rules are && and some are ||, but I don't know how to express an || with an order of precedence like this.
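
One rough sketch of how this could be structured: treat the must-match rules (1–3 and 6) as hard filters, and turn each ordered rule (4 and 5) into a rank, where a lower rank means a higher-priority alternative matched. The rule_* names in the usage example are placeholders for the real boolean checks:

def pick_primary(images, hard_rules, ordered_rule_groups):
    # hard_rules: boolean functions every candidate must pass (the && rules).
    # ordered_rule_groups: one list per || rule, alternatives in descending
    # preference; the index of the first alternative that matches is the rank.
    def rank(image, prefs):
        for i, rule in enumerate(prefs):
            if rule(image):
                return i
        return len(prefs)  # matched nothing: worst rank for this group

    candidates = [img for img in images if all(rule(img) for rule in hard_rules)]
    if not candidates:
        return None
    # Tuples compare element by element, so earlier groups take precedence
    # and later groups only break ties.
    return min(candidates, key=lambda img: tuple(rank(img, g) for g in ordered_rule_groups))

# Hypothetical usage, with rule_1 ... rule_5c standing in for the actual checks:
# best = pick_primary(
#     images,
#     hard_rules=[rule_1, rule_2, rule_3, rule_6],
#     ordered_rule_groups=[[rule_4a, rule_4b], [rule_5a, rule_5b, rule_5c]],
# )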

sharepoint online – Dynamically filter Document Library based on List tiles

For an OOTB modern site page, I know you can dynamically filter the contents of a Document Library web part based on the value of a user's selection in a separate List web part. You can also customise the List web part with JSON so it appears as fancy tiles. I'd like the user to click the tile containing their filter value and then see the document library filter dynamically.

How do you use the JSON to specify the List tile's selection action, so that clicking the tile behaves like selecting the radio button in a list view?

At the moment, my list tiles don’t do anything when clicked. They just change colour when hovered, and look nice. Using the tile view essentially breaks the connected filtering of the Document Library.

Here's the code I borrowed to make my tiles:

https://github.com/pnp/sp-dev-list-formatting/blob/master/view-samples/generic-tile-format/tile-view.json

probability – Strategy for finding which of the 2 coins is more biased based on tossing a total of 100 times

There are two variants of this problem that I want to solve. (1) One coin is fair, the other is biased towards heads. (2) Both coins are biased towards heads.

You have 100 tosses. What is your strategy to determine which coin is biased in (1), and which coin is more biased than the other in (2)?

For (1), the most obvious solution I can think of is to toss a single coin 100 times. If the numbers of heads and tails come out to be about even, then we conclude that the other coin is the biased one. For (2), we toss each coin 50 times and claim that the coin that produced the higher proportion of heads is the more biased coin. Is this correct?

This seems like a very simplistic way to think about the problem if this is the correct strategy.

I also don't think my strategy is foolproof. For example, if the bias is small, say the biased coin in (1) has only a 51% chance of landing heads, then the confidence in my strategy isn't great.
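
To make that last concern concrete, a rough back-of-the-envelope calculation for variant (2) with 50 tosses per coin: each estimated proportion of heads $\hat{p}_i$ has standard deviation $\sqrt{p_i(1-p_i)/50} \le \sqrt{0.25/50} \approx 0.07$, so

$$\operatorname{sd}(\hat{p}_1 - \hat{p}_2) \approx \sqrt{\tfrac{0.25}{50} + \tfrac{0.25}{50}} = 0.1,$$

which is ten times larger than a 0.01 difference in bias (51% vs 50%). With only 100 tosses, the simple split-the-tosses strategy can therefore only reliably separate coins whose biases differ by a margin comparable to 0.1 or more.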

How do I get Kafka to delete logs based on the retention period provided?

I have a Kafka cluster of 3 brokers running on AWS EC2 instances. However, the disks on these machines quickly run out of space because of the logs Kafka generates, and once a machine runs out of disk space, Kafka stops running. I have the following cleanup settings in my server.properties file:

log.retention.hours=24
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000

log.retention.hours was initially set to the usual 168, but I thought that might be long enough for the disk to fill up, since I am running this on t2.small machines with 30 GiB of disk storage, so I reduced the retention to 24 hours, and yet the disk still fills up.

I ran a describe on the topics to see whether a larger retention policy on a topic might be overriding the server.properties file, but the topic retention period is 1000 ms, as shown below after running kafka-topics.sh --zookeeper localhost:2181 --topic create-user --describe:

(screenshot of the kafka-topics.sh --describe output)

Also, in my server.properties file the directory setting is log.dirs=/tmp/kafka-logs, and I can see that Kafka has created some log files in that directory. However, I also see log files in the directory where Kafka is installed (~/kafka-dir), and its logs directory keeps growing. I have to log onto the machine and delete every file from that directory (~/kafka-dir/logs) every couple of days before the disk runs out of space, which is getting very inconvenient.

How do I get Kafka to delete these logs?
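
For what it's worth, the files that grow under ~/kafka-dir/logs are Kafka's application (log4j) logs, which log.retention.hours does not touch; that setting only governs the topic data under log.dirs. A rough sketch of capping the application logs in config/log4j.properties is shown below; the appender name and file path follow the layout Kafka typically ships with and may differ in your installation:

# Cap the broker's application log instead of letting it roll daily without limit
log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.MaxFileSize=100MB
log4j.appender.kafkaAppender.MaxBackupIndex=5
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

Separately, keeping the topic data in /tmp is fragile on many systems, since /tmp may be cleaned independently of Kafka, so log.dirs is usually pointed at a dedicated data directory.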