
Tools for managing DNS domain zone settings: what to choose

One of the most frequent requests to our technical support is about tools for configuring a DNS domain zone. In this article, we will cover the types of tools available for managing domain zone settings:

If you have additional questions or need our help in solving any tasks, write to us or call +380 44 583-5-583. We are ready to help you 24×7.

ms access 2010 – Managing and updating data from different tables with similar columns, centrally through a single form

I have year-wise tables of mails in MS Access 2010.

Each table has the fields: Mail ID, YYMM (year and month), Reference No, Subject, Reference Date, Needs Action (Y/N), Assigned To, Status (Open/Close), Remarks.

“Mail ID” and “YYMM” are concatenated in the format YYMM-XXXX, for example 2101-7000, by query.

Each table has around 7,000 records, and around 50% of the records are of the "Needs Action" type.

I need to regularly monitor and update the status of "Needs Action" mails. How can I merge all the mails from all tables into one table or query, and update them through a single form in their respective year tables?

I merged the tables with a SQL UNION query, but it does not allow updating the status. Please advise how I can do this.

Controlling and managing separate git repos on Azure DevOps for code security

My company is about to onboard some junior devs for the first time, and we want to limit their access to just the presentation layer. As it stands, everything is in one Git repo. The current plan is to fork the main repo and strip it down so that only the presentation layer and the compiled assemblies are left. The new devs can then branch from there, and the lead dev can do the merges himself.

Is there a better, more optimal way to approach this?

What software is used for managing a domain registry?

I want to know what software the top domain registries use to maintain the name registry.

I am asking because I am preparing a bid to propose a top-level domain registration service for my country, which has a lot of problems in this area.

Thank you and best wishes for the new year.

dungeons and dragons – Managing collection

I'm not sure I can ask this here, but lately I have been expanding my personal collection of Dungeons and Dragons products. Does anyone know if there is, somewhere, an official or unofficial list of Dungeons and Dragons products where I could check items off as I get them?

Thank you, and sorry for my English.

Checklists for managing a small business?

Hi everyone,

New member here (and new to business entirely, but a long-time engineering professional). I just started my own bootstrapped SaaS business, and I am surprised at the number of things an owner needs to do in a regular, disciplined way just to keep things running smoothly.

In my work so far, I have found checklists extremely helpful (see username :) ). So I am specifically curious whether anyone here uses daily/weekly/monthly checklists and is willing to share. And if there is a more effective approach than a checklist, I'm happy to hear suggestions and change my habits!

If it helps others, here’s what I have discovered so far:

– Read at least one article about SaaS growth techniques
– Check money in bank and balance the “books”
– Talk to at least one potential partner/send email
– Update goals for next month

– Spend a few minutes talking to at least one person in my target market
– Write at least one blog article
– Back up database
– Create tasks for next week
– Find at least one thing to celebrate!

In the spirit of paying it forward, I will volunteer to collate everyone's suggestions in this thread. Again, thank you everyone for your help!


Managing Your LowEndEmpire With Ansible, Part 2

In the previous tutorial, we walked through setting up Ansible on both the control (master) node and on target nodes.  Now let's look at using some of Ansible's capabilities.

One cool thing about Ansible is that hosts can have various roles, and you can layer these roles. So for example you could have roles like this:

  • a “common” role that contains tasks to be run on all hosts
  • a “backup-client” role for hosts that are backed up
  • a “db” role for hosts that operate as database servers

etc. In this tutorial, we'll have a "common" role that we intend to run on all hosts. Then we'll create another role called 'db' for database servers.

In /ansible, create the following playbook file called db.yml:

- hosts: db
  roles:
    - common

Note that we are using YAML, which is very fussy about syntax, particularly spaces, so if you get an error, make sure your file looks exactly as shown above.

Ansible comes with a wide variety of modules, which are packages that can be used to issue commands on target hosts. These cover a ton of common tasks. Some examples:

  • the ‘apt’ module can be used to install and remove packages using apt on Debian. There are also ‘yum’, ‘pacman’ and other package manager modules.
  • modules such as ‘user’, ‘group’, etc. can add/remove users and groups
  • locale_gen, timezone, and other modules can be used for system configuration
  • the ‘postgresql*’ set of modules and the ‘mysql*’ modules can be used to add/remove databases, add/remove users, etc.

Etcetera. So if you want to make sure that a certain package is present on your server, you don't need to script dpkg or apt-get commands yourself. You can just use the relevant Ansible module.
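For example, instead of scripting apt-get, a task using the apt module might look like this (a sketch; the package name is just an illustration):

```yaml
  # declaratively ensure a package is installed; Ansible checks the
  # current state and only acts if the package is missing
  - name: make sure htop is installed
    apt:
      name: htop
      state: present
```

Because the module is declarative, running it a second time changes nothing — it simply reports "ok".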

If something you need is not covered by a stock module, there are also modules for modifying files, making sure a line is present in a file, and so on, which allow for configurations not covered by the distributed modules. And of course you can always copy a script from your control node and execute it.
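As a sketch of those two fallbacks (the sysctl setting and script path here are hypothetical examples):

```yaml
  # ensure a specific line exists in a config file
  - name: set swappiness in /etc/sysctl.conf
    lineinfile:
      path: /etc/sysctl.conf
      regexp: '^vm.swappiness'
      line: 'vm.swappiness=10'

  # copy a script from the control node to the target and run it there
  - name: run a custom tuning script
    script: /ansible/src/tune.sh
```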

See the docs for a full list of all modules and capabilities.
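You can also browse modules from the command line with the ansible-doc tool that ships with Ansible:

```shell
ansible-doc -l      # list every available module
ansible-doc apt     # show documentation and examples for the apt module
```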

Create the following directory structure:

mkdir -p /ansible/roles/common/tasks

Now we’ll create the actual tasks we’re going to execute for the ‘common’ role. In /ansible/roles/common/tasks, create the file main.yml as follows. You do not need to include the comments – they are explanatory text for purposes of this tutorial.

The ‘name’ portion of each task is whatever you choose to call that task. The following line contains the module name followed by a colon, and then specific arguments and options for that module.


  # this task will run dpkg-reconfigure locales and ensure that
  # en_US.UTF-8 is present. Modify for your locale.

  - name: locale generation
    locale_gen: name=en_US.UTF-8 state=present

  # this task will run apt-get update

  - name: apt-get update
    apt: update_cache=yes

  # this task will run apt-get upgrade

  - name: apt-get upgrade
    apt: upgrade=dist

  # this task will install some packages we want on all hosts

  - name: basic packages
    apt: name=bsd-mailx,bzip2,cron,dnsutils,gpg,git,man-db,sqlite3,unzip,vim,wget,whois,zip state=latest

  # this task will set the localtime to US/Pacific. Modify to taste.

  - name: set timezone to US/Pacific
    timezone:
      name: US/Pacific

  # this task will make sure that cron is enable in systemd

  - name: cron enable
    service: name=cron enabled=yes state=started

  # this task ensures that root's .bash_profile exists
  - name: make sure /root/.bash_profile exists
    file:
      path: /root/.bash_profile
      state: touch

  # this task ensures that 'set -o vi' is in root's .bash_profile

  - name: set -o vi for root
    lineinfile:
      path: /root/.bash_profile
      state: present
      regexp: '^set -o vi'
      line: set -o vi

Then run the ansible-playbook command against db.yml:

root@master:/ansible# ansible-playbook db.yml

PLAY [db] **********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [...]

TASK [common : locale generation] **********************************************
ok: [...]

TASK [common : apt-get update] *************************************************
changed: [...]

TASK [common : apt-get upgrade] ************************************************
ok: [...]

TASK [common : basic packages] *************************************************
ok: [...]

TASK [common : set timezone to US/Pacific] *************************************
ok: [...]

TASK [common : cron enable] ****************************************************
ok: [...]

TASK [common : make sure /root/.bash_profile exists] ***************************
changed: [...]

TASK [common : set -o vi for root] *********************************************
changed: [...]

PLAY RECAP *********************************************************************
[...]                      : ok=9    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Now you may say to yourself “so you’ve set the locale and timezone and installed some packages – so what?” But the key points here are:

  • You didn’t have to do it manually.
  • Because it’s automated, it’s done identically every single time.
  • You can do this as easily on one host as on a thousand hosts.
  • Ansible allows you to define roles that baseline your environment, so every single host you use will be set up exactly as you wish.
  • You can run these commands every night to make sure no host drifts from the configuration you want. Think of these as “policies” that enforce how you want each server to be configured.
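That nightly enforcement idea can be sketched as a cron entry on the control node (the path, schedule, and log file here are hypothetical):

```
# /etc/cron.d/ansible-baseline -- re-apply the playbook at 02:00 every night
0 2 * * * root cd /ansible && ansible-playbook db.yml >> /var/log/ansible-nightly.log 2>&1
```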

Modify your db.yml to add a db-server role:

- hosts: db
  roles:
    - common
    - db-server


Then create the role's tasks directory:

mkdir -p /ansible/roles/db-server/tasks

And create a main.yml there with these tasks:


  # make sure that postgres is installed

  - name: postgres package
    apt: name=postgresql-11,python-ipaddress state=latest

  # make sure postgresql is configured to start on boot

  - name: postgres enable
    service: name=postgresql enabled=yes state=started

  # enable md5 (password) connections locally

  - name: modify pg_hba.conf to allow md5 connections
    postgresql_pg_hba:
      dest: /etc/postgresql/11/main/pg_hba.conf
      contype: local
      users: all
      databases: all
      method: md5
      state: present

Now run

ansible-playbook db.yml

You’ll see Ansible walk through all the ‘common’ tasks, and then:

TASK [db-server : postgres package] ********************************************
changed: [...]

TASK [db-server : postgres enable] *********************************************
ok: [...]

TASK [db-server : modify pg_hba.conf to allow md5 connections] *****************
changed: [...]

Other PostgreSQL modules allow us to create databases, setup users, grant permissions, etc.
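For instance, additional db-server tasks could use the postgresql_db and postgresql_user modules like this (a sketch: the database name, user, and password are hypothetical, and the modules assume their usual requirements, such as a psycopg2 driver on the target):

```yaml
  # create an application database, acting as the postgres superuser
  - name: create appdb database
    postgresql_db:
      name: appdb
    become: yes
    become_user: postgres

  # create a user and grant it all privileges on that database
  - name: create appuser with privileges
    postgresql_user:
      db: appdb
      name: appuser
      password: changeme
      priv: ALL
    become: yes
    become_user: postgres
```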

Ansible makes it easy to distribute files, whether for configuration or application purposes. It also comes with a powerful templating engine called Jinja2 that can fill in these templates so they are customized properly for each system.

Let’s add a mail-server role. We’ll use postfix.

Modify db.yml again:

- hosts: db
  roles:
    - common
    - db-server
    - mail-server

Then execute:

mkdir -p /ansible/roles/mail-server/tasks

And edit /ansible/roles/mail-server/tasks/main.yml:


  # ensure postfix packages are installed

  - name: postfix package
    apt: name=postfix state=latest

  # template /etc/mailname

  - name: /etc/mailname
    template: src=/ansible/src/mailname.j2 dest=/etc/mailname owner=root group=0 mode=0644

  # template /etc/postfix/main.cf

  - name: postfix main.cf
    template: src=/ansible/src/main.cf.j2 dest=/etc/postfix/main.cf owner=root group=0 mode=0644

  # make sure postfix is configured to start on boot and restart it
  # in case main.cf is changed

  - name: postfix enable and restart
    service: name=postfix enabled=yes state=restarted

  # set the root: alias in /etc/aliases

  - name: root alias in /etc/aliases
    lineinfile:
      path: /etc/aliases
      state: present
      regexp: '^root:'
      line: 'root:'

  # run newaliases

  - name: newaliases
    command: /usr/bin/newaliases

Now let’s create our templates. In /ansible/src/mailname.j2, enter this text:

{{ ansible_host }}

This is a Jinja2 template (hence the .j2 ending). Text between the double braces will be replaced with variables. Ansible supports many different variables; in this case we are using 'ansible_host', which will be replaced with the address of the target host.
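As a hypothetical further example, a template such as /ansible/src/motd.j2 could combine several of Ansible's built-in facts:

```
Welcome to {{ ansible_hostname }} ({{ ansible_distribution }} {{ ansible_distribution_version }})
This machine is managed by Ansible; local changes may be overwritten.
```

It would be deployed with a template task just like the one we used for /etc/mailname.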

Here is /ansible/src/main.cf.j2, our postfix configuration:

smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no
append_dot_mydomain = no
readme_directory = no
compatibility_level = 2
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = {{ ansible_nodename }}
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = $myhostname, , localhost
relayhost =
mynetworks = [::ffff:]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = all

Of course, many more Postfix variables could be set – but this is a tutorial on Ansible, not Postfix.

Now run:

ansible-playbook db.yml

And after the ‘common’ and ‘db-server’ sections, you’ll see the ‘mail-server’ tasks run:

TASK [mail-server : postfix package] *******************************************
changed: [...]

TASK [mail-server : /etc/mailname] *********************************************
changed: [...]

TASK [mail-server : postfix main.cf] *******************************************
changed: [...]

TASK [mail-server : postfix enable and restart] ********************************
changed: [...]

TASK [mail-server : root alias in /etc/aliases] ********************************
changed: [...]

TASK [mail-server : newaliases] ************************************************
changed: [...]

And if we look on the system, we see that the proper template substitutions have been made:

root@master:/ansible# cat /etc/mailname

Although we’ve only scratched the surface of Ansible’s capabilities, hopefully you can see the tremendous power and flexibility of the product.  With Ansible playbooks, you can both setup new systems quickly and enforce policies on a continuous basis.



I’m Andrew, techno polymath and long-time LowEndTalk community Moderator. My technical interests include all things Unix, perl, python, shell scripting, and relational database systems. I enjoy writing technical articles here on LowEndBox to help people get more out of their VPSes.

gui design – Managing conflicts with Sketch and Abstract

The team and I have recently started using Sketch and Abstract, and while for the most part we can see the value in it, there’s some confusion around how it’s handling conflicts. I should say that it’s possible this is all completely normal, but we just can’t find an answer anywhere.

What we have noticed is that when merging back to master, we get a conflict screen showing changes to symbols that no one has touched. Say, for example, there's a series of symbols dedicated to forms. Team A hasn't touched them (not that they know of), and neither has team B, yet they're flagged as conflicts.

When comparing the two versions side by side in the conflict screen, there is zero difference.

Has anyone experienced this at all? Would appreciate some insight.

vpn – Managing Azure IP filtering in the remote working era

I work for a company which, like most companies around the world, is required by government regulations to work remotely. We have an office downtown with a less-than-adequate fiber connection to the network.

We have all our environments on MS Azure. In the office-working era, the Azure firewall was configured so that services which did not require public internet access, e.g. SQL databases and key vaults, were accessible only from the fixed IP of the office. This worked great.

One could set up a development environment pointing to the dev instance of SQL Azure and authenticate using SQL authentication (see note 1). SSMS uses AAD identity directly. All connections came from the office, so Azure was configured with defense in depth in mind.

Now that developers work from home and exit with dynamic IPs, trouble has started. For the past few months, the dev team being made up of 2 people (long story, see note 2), we have been manually entering our IP addresses in the Azure Portal each morning, as reported by a what-is-my-IP service.

In the medium term, we want to expand our software development team, and the "change-your-IP-address-in-the-morning" policy does not scale; it also requires too many people to access the Azure Portal with permissions to modify firewalls.

Here are the options I have considered, and why they are not applicable or did not work:

Solution 1: change your IP address in the morning

Not scalable. As the dev team grows, confusion arises over the whitelists of IP addresses to enable in the Azure firewall. Each dev needs to enable their own IP address in both the SQL instances and the Key Vaults of every project they work on, and it is realistic that they may need to access multiple projects during the same day.

Solution 2: VPN into office

Not currently practical. When the office was full of people, bandwidth usage was already high enough for some to complain. VPNing in, and routing all traffic via the VPN, requires double the bandwidth.

It's not easy, cheap, or quick to buy additional transfer capacity. The problem may not even be price but the technical availability of additional Gbps: you can't just call your business ISP and ask for an extra Gbps to be enabled overnight.

Solution 3: Azure VPN

This is my theoretical preference, and is the subject of the question. The developer would just need to VPN into Azure every time they need to run the application, so that connections to Azure resources come from a well-known whitelisted network.

This would be ideal, but unfortunately when we configure it, the traffic is routed via the public internet (i.e. via the home link), and so the exiting IP is not whitelisted.

I have examined the routing tables of a client connected to the Azure VPN, and in fact only the private subnet is routed via the VPN tunnel. This is because the SQL host resolves to a public IP address.

Solution 4: RD-development

Where RD stands for "remote desktop". IMO this is suicide, because even clicking on widgets is affected by the employee's home bandwidth, and not everyone can get fiber (FTTH) at home. The only advantage is that the RD machines would already be in a known, whitelisted network. I know a lot of companies use this, but we are talking about working from home in a country where, despite the good intentions of an employer willing to pay for employees' internet, fiber is simply unavailable at many employees' locations.

To be clearer: it is one thing to run SQL queries against a cloud SQL Server; it is quite another to transfer the amount of data needed to render a 1920×1080 Visual Studio window displaying the value of a particular record during a debug session (stepping keystrokes included).

How can we fix this situation? We can’t just disable IP whitelisting.

I think, following solution 3, that we should somehow tell the VPN client to route the Azure traffic through the tunnel. I don't know whether that is possible, but the VPN would need to know the Azure IP ranges in advance.

I am confused about this topic. I’d like to ask if other companies have found an elegant solution to this kind of problem.

(Note 1): actually, the connection string is kept in Azure Key Vault, which is configured for AAD identity, so developers don't share the connection string for their EF Core applications.

(Note 2): we currently hire consultants from an external firm who VPN into their own office (solution 2), but we intend to hire internal developers in the future.

software – Managing kanban workflow with Gantt charts and tasks

I'm working on an undergraduate project and I decided to use the Kanban methodology. I am arranging the tasks in a Gantt chart according to SDLC phases: plan, design, development, and testing. How can I display them in a way that also reflects my Kanban board?