I will design a clean, minimal business card in one day for $4

I will design a clean, minimal business card in one day

This simple, professional business card goes for a clean look, using simple yet attractive positioning to make the card stand out. This design is a must-have if you want to stand out from other business cards. It comes in 4 different colors to cater to different preferences.

This business card template is well organized. Text and colors are fully editable, so you can edit them quickly and easily. All AI & EPS files are well organized and grouped by name, and you can reuse the design file later.

Creative & Unique Business Card

Very easy to use and customize, and ready for print.

FEATURES:

> Fully editable template

> Vector EPS and AI files

> Photoshop PSD mockup file

> CMYK color mode

> 3.5 × 2 inch dimensions

> 0.25 inch bleed

> Highly organized (labeled & grouped)

If you have any problems, please contact me.

  • Print-ready format
  • Fonts used: Bebas Neue, Arial, UTM Avo,…


Examples of production-grade clean / onion architecture

I am looking for examples of open-source applications implementing clean / onion architecture. All the examples I'm finding are toy examples rather than real-world applications.

Preferred languages / frameworks: PHP/Symfony, Angular, Java, Flutter

Thanks

Clean macOS system (clean terminal)

I hope everyone is safe.

I tried to install an Apache version using brew yesterday on macOS Catalina, but there were conflicts with the system's Apache version. After a few tries, I deleted files that I shouldn't have deleted. Is there a way to reset, reinstall, or clean up the terminal, the binaries, and the system back to factory settings? I have already searched a lot, but nothing has worked for me.

Thanks in advance.

clean code – How to manage input validation in microservices for duplicate data

What would be the best practice for managing input validation in microservices, especially for duplicated data?

To give some context, let's say I have 3 services:

User

A typical user service, with a User object holding a lot of detail (~40 fields in the object)

Asset

An Asset object like

{
  id,
  name,
  companyId,
  description
}

News

The News service has a Feed object that references both User and Asset, but News only cares about a subset of each (i.e. it only needs 10 of the User fields):

{
  id,
  title,
  description,
  asset,
  user
}
I am aware of the concept that News should have its own view of the user and the asset, and that data consistency can be achieved via a message hub. So the question now becomes: if I have a request to publish a news item over HTTP, the request body looks something like

{
  "title": "title",
  "description":"description",
  "asset": {
    "id": "asset1",
    "name": "phone",
    "companyId": "company-1"
  },
  "user": {
    "id: "user-1",
    "name": "name",
    "comapnyId": "company-1",
    "roles": ("reporter")
  }
}

How do I perform input validation for users and assets in the News service? Suppose the input validation for the user looks like this:

  • user role cannot be a mix of journalist and manager (one or the other)
  • name < 120 characters
  • companyId must be null if the user role is a client, otherwise it must be defined

So should I duplicate this input validation in both the News service and the User service? Using a common/shared object with validation seems simpler, but I always thought that was some kind of antipattern, or am I missing something here?

PS – I tried to avoid a direct service-to-service call (the client also cannot make multiple requests due to limited network bandwidth), so storing only assetId and userId in the news item will not work.

PPS – the backend code is written in Go, so an OOP approach may not be the best fit.
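
Since the backend is Go (per the PPS above), one middle ground is to share only the validation rules, not a shared domain object: both services import a tiny leaf package whose only job is to enforce the rules. Below is a minimal sketch of that idea, assuming the field names from the example request; the package, type, and function names are hypothetical.

    // Package uservalidation holds only the shared rules: no behavior,
    // no persistence, which keeps the coupling between services small.
    package uservalidation

    import (
        "errors"
        "unicode/utf8"
    )

    type UserInput struct {
        ID        string
        Name      string
        CompanyID string
        Roles     []string
    }

    // ValidateUser enforces the three rules from the question; both the
    // User service and the News service call this single definition, so
    // the rules cannot drift apart.
    func ValidateUser(u UserInput) error {
        hasJournalist, hasManager, isClient := false, false, false
        for _, r := range u.Roles {
            switch r {
            case "journalist":
                hasJournalist = true
            case "manager":
                hasManager = true
            case "client":
                isClient = true
            }
        }
        if hasJournalist && hasManager {
            return errors.New("role cannot be both journalist and manager")
        }
        if utf8.RuneCountInString(u.Name) >= 120 {
            return errors.New("name must be shorter than 120 characters")
        }
        if isClient && u.CompanyID != "" {
            return errors.New("companyId must be empty for clients")
        }
        if !isClient && u.CompanyID == "" {
            return errors.New("companyId is required for non-clients")
        }
        return nil
    }

Sharing pure, dependency-free rule code like this is generally considered less of an antipattern than sharing entities, because nothing about storage or behavior leaks across the service boundary; the cost is that both services must pick up a new version of the package when a rule changes.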

Clean architecture and the State pattern

I am working on an application that has 3 states:

  • Offline: The data displayed in the GUI is disabled for modification.
  • Hidden: The data displayed in the GUI can be changed, and is saved only to
    local files on demand (i.e. when the user clicks the "save" button).
  • Online: The data displayed in the GUI can be modified, and it is automatically saved to the cloud as soon as a change occurs. Optionally, the user can also save the changes to local files (i.e. when the user clicks the "save" button).

I am implementing Clean Architecture plus MVP to isolate the GUI from the business rules. To manage the states I plan to use the State pattern, and that is where my question comes from:

  • Since I would be using MVP, and the states listed above seem to affect only GUI elements, would the State pattern apply only to the Presenter objects? (A sketch of that option follows.)
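
The question does not fix a language, so here is a minimal sketch of that option in Go (all names are hypothetical): the presenter owns a state object and delegates exactly the behavior that varies between the three modes.

    package presenter

    // EditorState captures what differs between Offline, Hidden and Online.
    type EditorState interface {
        CanEdit() bool
        OnDataChanged(p *Presenter) // called whenever the user edits a value
        OnSaveClicked(p *Presenter) // called when the user clicks "save"
    }

    // Presenter delegates mode-dependent behavior to its current state.
    type Presenter struct {
        state EditorState
    }

    func (p *Presenter) SetState(s EditorState) { p.state = s }

    type OfflineState struct{}

    func (OfflineState) CanEdit() bool              { return false }
    func (OfflineState) OnDataChanged(p *Presenter) {} // editing is disabled
    func (OfflineState) OnSaveClicked(p *Presenter) {} // nothing to save

    type OnlineState struct{}

    func (OnlineState) CanEdit() bool              { return true }
    func (OnlineState) OnDataChanged(p *Presenter) { /* push the change to the cloud */ }
    func (OnlineState) OnSaveClicked(p *Presenter) { /* optionally also write local files */ }

One caveat: "changes are auto-saved to the cloud" sounds like a business rule rather than a presentation detail, so the State pattern may belong at the use-case boundary instead of only in the presenters; in that case the presenter-level states above would simply mirror a state that an interactor owns.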

mediawiki – Is there a tool to clean up Wikipedia articles where there are duplicate references?

Sometimes when editing Wikipedia, I find an article where the references have been duplicated.

Is there a tool that makes it easy to merge all of these references to the same URL into one reference?

Here is an example page:
https://en.wikipedia.org/w/index.php?title=David_Green_(social_entrepreneur)&oldid=936659277

You can see in the example that the same Schwab Foundation page is referenced multiple times.

entity – Why can't entities leave the inner layers of clean architecture?

I was reading about clean architecture, and I don't understand this part:

Do not use your Entity objects as data structures to circulate them in the outer layers. Create separate data model objects for this.

As long as I don't let the entities escape via an API (that's why I create presenters), what's wrong with letting the entities leave the inner layer of the domain? Why create DTOs for this?

The only thing I should avoid (in the onion model) is outward-pointing dependencies, but the case I presented points inward (the API depending on the entities).
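
The usual argument is less about the direction of the dependency arrow and more about what consuming entities directly does to them over time: serialization concerns (renaming fields, hiding internals, nullability) start shaping the domain model. A minimal sketch of the split, in Go with hypothetical names:

    package account

    import "errors"

    // User is a domain entity: it guards its invariants and knows
    // nothing about JSON, HTTP, or any outer layer.
    type User struct {
        id    string
        email string
    }

    func (u *User) ChangeEmail(email string) error {
        if email == "" {
            return errors.New("email must not be empty")
        }
        u.email = email
        return nil
    }

    // UserResponse is the data model that circulates in the outer layers:
    // a dumb struct that can grow JSON tags, drop fields, or rename them
    // without the entity ever noticing.
    type UserResponse struct {
        ID    string `json:"id"`
        Email string `json:"email"`
    }

    // ToResponse does the entity-to-DTO mapping (in a real project this
    // would live next to the presenter, not next to the entity).
    func ToResponse(u *User) UserResponse {
        return UserResponse{ID: u.id, Email: u.email}
    }

So the dependency rule is satisfied either way; the DTO simply keeps the entity free to change for domain reasons while the outer contract changes for presentation reasons.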

clean code – C# | How to unit test a timer (in a daily job scheduler)?

I wrote this class:

using System;
using System.Threading;
using System.Threading.Tasks;

    public interface IJobScheduler {
        void RunDaily(Task task, int hour, int minutes);
    }

    public class JobScheduler : IJobScheduler, IDisposable
    {
        Timer timer;
        bool isEnabled = true;  // guard flag checked by the timer callback below

        /// <summary>
        /// Run the provided task every day at the defined time.
        /// </summary>
        /// <param name="task">Action to execute.</param>
        /// <param name="hour">At what time (hour) the task has to be executed. LOCAL time.</param>
        /// <param name="minutes">At what time (minutes) the task has to be executed. LOCAL time.</param>
        public void RunDaily(Task task, int hour, int minutes)
        {
            var todayRun = DateTime.Today.Add(new TimeSpan(hour, minutes, 0));

            var timeToGo = todayRun > DateTime.Now ?
                (todayRun - DateTime.Now) :          // run today
                todayRun.AddDays(1) - DateTime.Now;  // run tomorrow

            timer = new Timer(x => { if (isEnabled) task.RunSynchronously(); },
                state: null,
                dueTime: (int)timeToGo.TotalMilliseconds,
                period: 24 * 60 * 60 * 1000 /* 24h */);
        }

        public void Dispose()
        {
            try { timer?.Dispose(); } catch { }
        }
    }

Now here is the initial test that I wrote:

        [Test, Category("long_test")]
        public void TaskScheduler_RunDaily__should__execute_at_the_specified_time() {

            // scheduler has a precision of 1 minute so...
            var runAt = DateTime.Now.AddMinutes(1);

            var runCounter = 0;
            Task task = new Task(
                () => runCounter++
                );

            int hours = runAt.Hour;
            int minutes = runAt.Minute;
            var scheduler = new JobScheduler();
            scheduler.RunDaily(task, hours, minutes);

            // scheduler has a precision of 1 minute so...
            Thread.Sleep(TimeSpan.FromSeconds(60+2));

            runCounter.Should().Be(1);
        }

It could be improved to exit as soon as the task has executed, or, by introducing seconds as a parameter (zero by default), I might be able to reduce the test time to 1 or 2 seconds.

But my real question is:
How can I verify that the scheduler's timer is set to 24 hours (i.e. that it will execute the task again after 24 hours)?

I started thinking about checking the internal timer …

        [Test]
        public void TaskScheduler_RunDaily__should__have_an_internal_timer_set_to_24_hours()
        {
            // scheduler has precision of 1 minute so...
            var runAt = DateTime.Now.AddMinutes(1);

            var runCounter = 0;
            Task task = new Task(
                () => runCounter++
                );

            int hours = runAt.Hour;
            int minutes = runAt.Minute;
            var scheduler = new JobScheduler();
            scheduler.RunDaily(task, hours, minutes);

            // use reflection to get the internal Timer
            var timerField = typeof(JobScheduler).GetField("timer",
                System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Instance);

            if (timerField == null) Assert.Fail("Cannot read the Timer field");

            var timer = timerField.GetValue(scheduler) as Timer;

            // what can I test here? (pseudocode: System.Threading.Timer
            // does not expose its dueTime or period)
            // timer.Should().Be(true);
            // timer.Should().Be(24_HOURS);
        }

and I think it is okay to use reflection to check an internal implementation of MY code, but it is not acceptable to check the internal implementation of Timer itself.

The real behavior to test here is the fact that the task runs again after 24 hours.
I can think of "exposing" these 24 hours so that I can mock them, but I really don't like the idea: RunDaily means 24 hours, so why should I expose this value as a parameter?
The other rule I like to follow is: "Don't make the code ugly just because it has to be testable; prefer simplicity."
To be clear, the RunDaily method is 10 lines of code and it only takes the necessary input; I don't want to change it just because there is no practical way to test it as it is.
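
One compromise that keeps RunDaily's public surface exactly as it is: hold the 24-hour period in a non-public constructor that only tests can reach, so the test asserts the real repeat behavior instead of reflecting on Timer. In C# that would be an internal constructor plus [InternalsVisibleTo]; here is the shape of the idea sketched in Go, with hypothetical names:

    package scheduler

    import "time"

    type JobScheduler struct {
        period time.Duration
        timer  *time.Timer
    }

    // NewJobScheduler is the only constructor production code sees, so
    // "RunDaily means 24 hours" stays implicit, exactly as before.
    func NewJobScheduler() *JobScheduler {
        return &JobScheduler{period: 24 * time.Hour}
    }

    // newTestScheduler is unexported: only tests in this package can
    // shrink the period and observe a real second run in milliseconds.
    func newTestScheduler(period time.Duration) *JobScheduler {
        return &JobScheduler{period: period}
    }

    func (s *JobScheduler) RunDaily(task func(), hour, min int) {
        now := time.Now()
        next := time.Date(now.Year(), now.Month(), now.Day(), hour, min, 0, 0, now.Location())
        if !next.After(now) {
            next = next.Add(24 * time.Hour) // today's slot already passed: run tomorrow
        }
        s.timer = time.AfterFunc(time.Until(next), func() {
            task()
            s.timer.Reset(s.period) // re-arm with the injected period
        })
    }

The test then builds the scheduler with a period of a few milliseconds, waits past the first run, and asserts the task ran twice; the production API is untouched, which fits the "don't make the code ugly for testability" rule.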

python – Interactive command line interface – Download and clean up json API data

I am working on creating an interactive command line interface with many moving parts, and I am trying many new things for the first time. It currently works, but I fear that the code base will be difficult to maintain in the future and that its design is flawed, as I am inexperienced. Everything works as follows (see the REPO and the API_docs):

api_scraper.py

This is the basic module that retrieves the data from the API itself. Credit for the solution goes to @Grismar (see the solution and the reasoning there). I had not been able to solve it myself, and I have not yet fully understood the solution; it is quite complicated.

get_id.py

This module is built directly on api_scraper.py to retrieve the IDs of leagues / seasons / teams, which are saved in JSON format for later use so that the API does not need to be called every time an ID is required.

check_api.py

This module validates that the leagues and seasons retrieved by get_id.py actually contain information and are not empty. If a league is empty, it is deleted from the JSON file.

get_stats.py

This is a core module, built directly on api_scraper.py, to retrieve player stats / fixture stats / team standings / teams. The class SeasonStats takes two arguments, a league and a season, in the form SeasonStats('EN_PR', '2019/2020'). All methods return JSON files saved in the raw_data directory. But since each instance of SeasonStats initializes several types of IDs, which results in many API calls, it is quite slow.
Since there are a lot of leagues and every league has several seasons, I decided to create a pickled file holding a dict where each value is an instance of a league and a season, e.g. {'EN_PR_2019/2020': SeasonStats('EN_PR', '2019/2020')}. The total time to create the pickle is approximately 30 minutes, depending on the connection.

clean_stats.py

Contains functions to clean up the raw data. The raw data comes in a nested JSON format that is difficult to use directly, so it is flattened so that there is one value for each key.

This is the interactive command line interface. It is built with the cmd and docopt modules. It uses the pickled file, get_stats.py and clean_stats.py. I somewhat failed at creating an abstract class, in the sense that when an argument is passed to an allowed command, the argument is checked against a key in a dict that maps to the requested function. As a result, for each new feature added, the dict must be changed (see the registry sketch below). The main commands are currently view, which shows the leagues and seasons in the file season_params.json so you can see which leagues and seasons are supported; download, which downloads a type of statistics for a league and a season; and clean, which flattens the raw data.
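
On the "each new feature means editing the dict" point: a registry where each command registers itself leaves the dispatcher untouched as features are added. Your project is Python, but here is the shape sketched in Go (hypothetical names); in Python the same thing is a module-level dict filled by a decorator.

    package cli

    import "fmt"

    // commands maps a command name to its handler. Handlers register
    // themselves, so adding a feature never touches the dispatcher.
    var commands = map[string]func(args []string) error{}

    func register(name string, fn func(args []string) error) {
        commands[name] = fn
    }

    func init() {
        register("download", func(args []string) error {
            fmt.Println("download stats for", args) // placeholder body
            return nil
        })
        register("clean", func(args []string) error {
            fmt.Println("flatten raw data for", args) // placeholder body
            return nil
        })
    }

    // Dispatch looks the command up once and runs it.
    func Dispatch(name string, args []string) error {
        fn, ok := commands[name]
        if !ok {
            return fmt.Errorf("unknown command %q", name)
        }
        return fn(args)
    }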

directory.py

Helper module for loading / saving / writing JSON files.

I am looking for any kind of input you are willing to provide to improve this project. My biggest concern is how to bridge the gap between retrieving the IDs and retrieving the statistics, which is currently handled by the pickled file that is required to retrieve statistics from the API. Beyond that:

  • Appropriate project structure
  • Errors in the current implementation
  • Design errors / improvements
  • Any other feedback, small or large.

Thank you very much for reading and taking the time to review. This is my first "proper" project in Python, and so far I feel really good about it. The end goal is to calculate odds and find value in the odds.

nodes – I have big gaps in the NIDs on my site. Should I clean them up?

So, since I have a little spare time, I am doing deferred maintenance on my personal site. You know, deleting the modules I no longer need, cleaning up the taxonomy, etc., etc. There is technical debt on this site because, well… you know what they say about the shoemaker's children. When I decide it's time to move the site to D9, I want to ease the transition. The site has been around for a while: it was originally a Drupal 4.7 site, which was upgraded to 5, 6, 7 and is currently a D8 site.

So, enough preamble. Back in the Drupal 5 days, I had 4 Drupal sites that I merged into this site. Since there was no MigrateAPI 12 years ago, I renumbered the NIDs within the source sites, then copied the records from each database to the target site. In case I needed to debug things, I left gaps between the nodes of each site (I added 2000 to the NIDs of siteA, 4000 to the NIDs of siteB, etc.) so I could easily tell where the nodes came from.

It worked, and the merged site has been evolving for more than a decade. I have reached a point where the system has about 4,000 nodes, but the highest NID is approaching 11,000. I don't see any risk here; I've worked on much larger sites, and I won't risk exceeding the maximum NID anytime soon.

My question is: is there an advantage to compacting the NIDs (probably by migrating to a fresh Drupal installation)? I don't see one, but I may be missing something that someone else can see.

Thank you.