backup – Unlock a remote VeraCrypt container using a local keyfile

I am looking to create an encrypted container using VeraCrypt which will be located on a remote server. This server will be on my local network, behind a firewall with no publicly accessible ports.

This container will be used as a backup for files on my laptop.

I currently rsync the files to the server, where they live in a plaintext format.

I am looking to see if it is possible to do something along the following lines:

  1. Have the container encrypted using a keyfile, which will be located on e.g. a USB key
  2. Have a script run on the local machine which SSHes into the remote server and opens the container using the keyfile on the USB key, mounting the container to something like /mnt/secure_data
  3. Rsync the differences
  4. Unmount /mnt/secure_data

This would be run every 30 minutes, for example (a rough sketch of what I mean follows).
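
Something like this is what I have in mind. This is only a sketch: the paths, hostname and keyfile location are placeholders, and I haven't verified the exact veracrypt flags (I'm assuming a keyfile-only volume, hence the empty password):

# /usr/local/bin/backup-to-container.sh -- rough sketch, untested
KEYFILE=/media/usb/backup.key          # keyfile on the local USB stick
REMOTE=user@backup-server              # server on the local network
CONTAINER=/srv/backup/container.vc     # encrypted container on the server
MOUNTPOINT=/mnt/secure_data

# Copy the keyfile to a tmpfs on the server for the duration of the backup
scp "$KEYFILE" "$REMOTE:/dev/shm/backup.key"

# Mount the container on the server in text mode, without prompts
ssh "$REMOTE" "veracrypt --text --non-interactive --keyfiles=/dev/shm/backup.key \
  --password='' --pim=0 --protect-hidden=no $CONTAINER $MOUNTPOINT"

# Push the differences, then dismount and remove the temporary keyfile copy
rsync -az --delete ~/work/ "$REMOTE:$MOUNTPOINT/"
ssh "$REMOTE" "veracrypt --text --dismount $MOUNTPOINT; rm -f /dev/shm/backup.key"

# crontab entry to run it every 30 minutes
*/30 * * * * /usr/local/bin/backup-to-container.sh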

The complicating factor here seems to be managing a remote container. I can't see a flow that achieves the above with a local container that I then rsync to the server, because the container will be 'open' while the files are being worked on, and I want to back up the work incrementally, not just when I'm finished for the day.

Is this infeasible? Is there a better way to achieve the goal? Thanks!

How to remove free space and make the APFS container take it up

I am on macOS 11.1 Big Sur.
I have a partition with no disk identifier, shown as (free space).
I would like to remove this and make the APFS container take up the free space.

Here is a screenshot of my GPT and Disks

[screenshot: GPT partition table and disk layout]
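
From what I've read, the command for this should be something along the following lines, though I'm not sure it applies to my layout (the disk identifier below is a placeholder, not taken from my screenshot):

# List disks/containers to find the APFS container (or its physical store)
diskutil list

# Grow the APFS container into the adjacent free space; a size of 0 means
# "use all available space" (disk1 is a placeholder identifier)
sudo diskutil apfs resizeContainer disk1 0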

networking – Ports exposed by docker container are shown as filtered – unable to connect

I am working on a fresh server installation of Ubuntu 20.04.
I started a sample nginx container by running docker run --rm -p 80:80 nginx.
Port 80 appears to be open on the machine, but I can't curl the nginx default page:

$ nmap localhost
Starting Nmap 7.80 ( https://nmap.org ) at 2020-11-15 13:06 GMT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000077s latency).
Other addresses for localhost (not scanned): ::1
Not shown: 998 closed ports
PORT   STATE SERVICE
22/tcp open  ssh
80/tcp open  http

$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:25:90:d7:xx:xx brd ff:ff:ff:ff:ff:ff
    inet 81.169.xxx.xxx/32 scope global dynamic eno1
       valid_lft 60728sec preferred_lft 60728sec
    inet6 fe80::225:90ff:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever
3: eno2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 00:25:90:d7:xx:xx brd ff:ff:ff:ff:ff:ff
4: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:70:d9:xx:xx brd ff:ff:ff:ff:ff:ff
    inet6 fe80::42:70ff:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever
48: br-49042740d2e8: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:63:fe:xx:xx brd ff:ff:ff:ff:ff:ff
    inet6 fe80::42:63ff:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever
68: veth17ce2e9@if67: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP group default
    link/ether d6:e2:53:0b:xx:xx brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::d4e2:53ff:xxxx:xxxx/64 scope link
       valid_lft forever preferred_lft forever


Here is my iptables configuration (output of iptables-save):

# Generated by iptables-save v1.8.4 on Sun Nov 15 13:00:57 2020
*filter
:INPUT ACCEPT (151:14142)
:FORWARD DROP (15:780)
:OUTPUT ACCEPT (123:16348)
:DOCKER - (0:0)
:DOCKER-ISOLATION-STAGE-1 - (0:0)
:DOCKER-ISOLATION-STAGE-2 - (0:0)
:DOCKER-USER - (0:0)
-A FORWARD -j DOCKER-USER
-A FORWARD -j DOCKER-ISOLATION-STAGE-1
-A FORWARD -o br-49042740d2e8 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-49042740d2e8 -j DOCKER
-A FORWARD -i br-49042740d2e8 ! -o br-49042740d2e8 -j ACCEPT
-A FORWARD -i br-49042740d2e8 -o br-49042740d2e8 -j ACCEPT
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 80 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i br-49042740d2e8 ! -o br-49042740d2e8 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
-A DOCKER-ISOLATION-STAGE-1 -j RETURN
-A DOCKER-ISOLATION-STAGE-2 -o br-49042740d2e8 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
-A DOCKER-ISOLATION-STAGE-2 -j RETURN
-A DOCKER-USER -j RETURN
COMMIT
# Completed on Sun Nov 15 13:00:57 2020
# Generated by iptables-save v1.8.4 on Sun Nov 15 13:00:57 2020
*nat
:PREROUTING ACCEPT (20:1254)
:INPUT ACCEPT (20:1254)
:OUTPUT ACCEPT (0:0)
:POSTROUTING ACCEPT (0:0)
:DOCKER - (0:0)
-A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
-A OUTPUT ! -d 127.0.0.0/8 -m addrtype --dst-type LOCAL -j DOCKER
-A POSTROUTING -s 172.19.0.0/16 ! -o br-49042740d2e8 -j MASQUERADE
-A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
-A POSTROUTING -s 172.17.0.2/32 -d 172.17.0.2/32 -p tcp -m tcp --dport 80 -j MASQUERADE
-A DOCKER -i br-49042740d2e8 -j RETURN
-A DOCKER -i docker0 -j RETURN
-A DOCKER ! -i docker0 -p tcp -m tcp --dport 80 -j DNAT --to-destination 172.17.0.2:80
COMMIT
# Completed on Sun Nov 15 13:00:57 2020

From my local machine, I am unable to connect to the server. Ports are being shown as filtered:

$ nmap example.de -Pn
Starting Nmap 7.80 ( https://nmap.org ) at 2020-11-15 14:12 CET
Nmap scan report for example.de (81.169.xxx.xxx)
Host is up (0.037s latency).
rDNS record for 81.169.xxx.xxx: h290xxxx.stratoserver.net
Not shown: 994 closed ports
PORT     STATE    SERVICE
22/tcp   open     ssh
80/tcp   filtered http
135/tcp  filtered msrpc
139/tcp  filtered netbios-ssn
445/tcp  filtered microsoft-ds
9876/tcp filtered sd

Nmap done: 1 IP address (1 host up) scanned in 2.67 seconds

Running the container with host networking works as expected: I can access the nginx default page via localhost and from my local machine.
docker run --rm --network host nginx

Why is the port publishing not working as expected?
How can I fix this or analyze the problem further?
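
Would checks along these lines be the right way to narrow it down? (Just a sketch; the container IP and interface name are taken from the output above.)

# Does curl work against the published port and against the container directly?
curl -sv http://127.0.0.1:80/
curl -sv http://172.17.0.2:80/

# Do the DOCKER DNAT rule counters increase when I connect from outside?
sudo iptables -t nat -L DOCKER -n -v --line-numbers

# Does incoming traffic on port 80 reach the host at all?
sudo tcpdump -ni eno1 tcp port 80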

proxy – making a host port accessible from within a container in Kubernetes

We have set up a Kubernetes cluster on a set of (virtual) Linux hosts. Each host runs an internal HTTP proxy on the host's 127.0.0.1:3128. To access external HTTP/HTTPS resources from this network, the proxy must be used.

We would like to make the external network accessible from within containers in the cluster. Containers running on a given host should use the HTTP(S) proxy instance on that host.

How can we make these proxy services available from within containers?
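
To make the constraint concrete: a pod has its own network namespace, so 127.0.0.1:3128 inside a container is not the host's proxy; the proxy would have to be reachable on some address the pod can route to. A sketch of the check we would like to pass (the node IP and test image are placeholders, not part of our setup):

NODE_IP=10.0.0.12   # placeholder for the node's internal IP
kubectl run proxy-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -x "http://${NODE_IP}:3128" -sS -o /dev/null -w '%{http_code}\n' http://example.com/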

c# – In-proc event dispatching through IoC container

Here are the sender and handler interfaces:

public interface ISender
{
    Task SendAsync(object e);
}

public interface IHandler<in TEvent>
{
    Task HandleAsync(TEvent e);
}

So I register a sender service implementation in the IoC container, which dispatches events to all compatible IHandler<in T> implementations. I use Autofac with a contravariance registration source, but it could be something else:

// The sender service implementation
public class Sender : ISender
{
    public Sender(IServiceProvider provider) => Provider = provider;
    IServiceProvider Provider { get; }

    public async Task SendAsync(object e)
    {
        // Build IEnumerable<IHandler<TEvent>> for the runtime type of the event
        var eventType = e.GetType();
        var handlerType = typeof(IHandler<>).MakeGenericType(eventType);
        var handlerListType = typeof(IEnumerable<>).MakeGenericType(handlerType);
        var method = handlerType.GetMethod("HandleAsync", new[] { eventType });
        var handlers = ((IEnumerable)Provider.GetService(handlerListType)).OfType<object>();

        // Run every handler on the thread pool; the ContinueWith swallows handler exceptions
        await Task.WhenAll(
            handlers.Select(h =>
                Task.Run(() => (Task)method.Invoke(h, new object[] { e }))
                    .ContinueWith(_ => { })));
    }
}

amazon web services – port mapping didn't happen for a container deployed on AWS ECS (on EC2)

Context:

I am using Circle CI's aws-ecs/deploy-service-update orb to deploy my Docker container: it pulls the latest image from AWS ECR and deploys it to AWS ECS, backed by an EC2 instance. The container is a machine-learning model that accepts API requests on TCP port 3000 (I am using FastAPI for this) and returns predictions. After deploying it, I couldn't send requests on port 3000 to the public IP of the task's container instance (this IP is not my EC2 instance's public IP; the EC2 instance only has a private IP, its public IP is disabled).

Debugging

  1. I checked my security group and made sure that port 3000 is open to requests from all IPs (0.0.0.0/0) as part of the inbound rules.
  2. I stopped the task (which automatically stops the container running on the EC2 instance), thinking that something may have gone wrong on the Circle CI side. According to the service configuration (1 desired task) and the task definition in AWS ECS, a new task (and hence a new container) then started automatically. But I couldn't send requests to this one either.
  3. I SSHed into my EC2 instance to check whether port 3000 is open. This is when I learned that the ports weren't mapped at all:
    [screenshot: docker ps output with an empty PORTS column]
    As you can see, the PORTS column is empty for the container, even though the container's command says it should accept requests on port 3000.

And here are the open ports of the EC2 instance:
[screenshot: open/listening ports on the EC2 instance]
As you can see, port 3000 is not listed here.
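
(In other words, on the instance itself, checks along these lines show no port mapping and nothing listening on 3000; the exact commands are just a sketch of what the screenshots show:)

# On the EC2 instance, via SSH:
docker ps                      # the PORTS column is empty for the task's container
sudo ss -lntp | grep ':3000'   # nothing is listening on port 3000 on the host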


Here is the task definition, with port mappings, that deployed the container (to AWS ECS) shown in the docker ps screenshot above:
[screenshot: task definition with port mappings for the container]
In the task definition, you can see the port mappings I have defined for the container.


Here is the task running on my EC2 instance with the task definition shown above; the network mode I am using is 'awsvpc':
[screenshot: running task details, network mode 'awsvpc']


Here's the "Networking" tab of the ENI associated with the task, and also the inbound rule of the security group associated with the EC2 instance the task is running on, which accepts requests on port 3000 from all IPs.
[screenshot: ENI "Networking" tab and security group inbound rule for port 3000]

EDIT 1:

After I did

docker run -p 3000:3000 <my-image:my-tag>

inside the EC2 machine (after SSHing in from my laptop), I could send API requests to the container via its public IP on the AWS ECS cluster and receive proper responses. This means that ports are mapped only when I run the container manually.
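
If it helps, this is how I would check where the task is actually listening, given the 'awsvpc' network mode (a sketch; the cluster and service names are placeholders, and <task-arn> / the ENI IP come from the describe output):

# Find the running task and the ENI attached to it (names are placeholders)
aws ecs list-tasks --cluster my-cluster --service-name my-service
aws ecs describe-tasks --cluster my-cluster --tasks <task-arn>   # note the ENI and its private IP

# From inside the VPC (e.g. from the EC2 instance), try the task's ENI address directly
curl -v http://<task-eni-private-ip>:3000/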

I had no problems with ports when I used Fargate, whether I updated the service from Circle CI or started tasks manually.

So, how do I get ports mapped automatically when a task is run from the AWS ECS service dashboard or from Circle CI? If I run the Docker container manually, I won't get logs in AWS CloudWatch automatically and won't be able to stop it from the AWS ECS dashboard. Another container run by AWS on the EC2 instance (the ECS agent) normally takes care of those things: it routes the logs to CloudWatch and accepts commands to stop the existing container and start a new one from the image stored in AWS ECR, without me having to SSH in every time I want to look at logs or start/stop containers.

What has gone wrong here that led to the ports not being mapped, and how do I fix it and map the ports properly, so that I can send API requests to my container?

docker – how to stop MySQL container from initializing at start

For testing purposes I want to build a MySQL container that has a dataset bootstrapped into it. I know that mounting is possible and that you can place SQL scripts into the init directory to run the import at start, but that's not what we want. So I tried to build an image myself that simply copies the data files into the Docker image.

I use the following Dockerfile to build my image:

FROM mysql:8.0.21
EXPOSE 3306

ARG BUILD_ID
LABEL stage=builder
LABEL build=$BUILD_ID
ENV TZ=Europe/Amsterdam

RUN apt-get update && apt-get install -y tzdata
ADD http://server.example.com/data/datafiles_2020_11_02.tar.gz /var/lib/datafile.tar.gz
RUN tar -xvzf /var/lib/datafile.tar.gz -C /var/lib/mysql && chown -R mysql.root /var/lib/mysql
#ADD initieel/* /docker-entrypoint-initdb.d/
ADD cnf/* /etc/mysql/mysql.conf.d/

This runs successfully, so afterwards I spin up my freshly built container, and during startup something in the container just initializes a fresh database and throws away everything that I placed into /var/lib/mysql. I checked the README that comes with the mysql image, but there is nothing I can find that relates to this.
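
For reference, this is roughly how I build and start it; bypassing the entrypoint is how the image contents can be checked independently of whatever happens at startup (a sketch; the tag, build argument and password are placeholders):

# Build and run
docker build --build-arg BUILD_ID=test -t mysql-prefilled:test .
docker run --rm --name mysql-test -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql-prefilled:test

# Bypass the entrypoint to see whether the extracted data files made it into the image at all
docker run --rm --entrypoint ls mysql-prefilled:test -la /var/lib/mysql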

Thanks in advance for taking the time to read. I will update my post if it is lacking any information.

dnsmasq in Docker is not running after the container is built or started

I am trying to use dnsmasq to serve DNS requests from within a Docker container (based on webdevops/php-apache-dev) to resolve e.g. xyz.localhost to 127.0.0.1. This is needed because the web application makes cURL requests to the domain xyz.localhost.

Here is my Dockerfile

FROM webdevops/php-apache-dev:7.3

# Install dnsmasq
RUN apt-get update && apt-get install -y dnsmasq

# Add localhost domain to dnsmasq configuration (to be able to resolve any subdomain of localhost)
RUN echo "address=/localhost/127.0.0.1" >> /etc/dnsmasq.d/docker-localhost.conf

# Tried to run manually (without success)
#RUN service dnsmasq start || service dnsmasq restart

Now, when I open a bash shell in the container (docker exec -it my-container bash) and check whether dnsmasq is running:

root@269fe0ad10df:/etc# service dnsmasq status
dnsmasq:dnsmasqd                 STOPPED   Not started

There are symlinks to /etc/init.d/dnsmasq in every runlevel (rc0.d to rc6.d), in the same way as there are for apache and the other services that are running perfectly fine.
When I call /etc/rc4.d/S01dnsmasq start manually, I get the following error:

(....) Starting DNS forwarder and DHCP server: dnsmasq
dnsmasq: failed to create listening socket for port 53: Address already in use
 failed!
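
To see what is already holding port 53 at that point, I suppose a check along these lines would show it (a sketch; ss comes from iproute2 and may not be present in the image, netstat would be the alternative):

ss -lntup | grep ':53'
# or, if ss is not available:
netstat -lntup | grep ':53'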

On the other hand it is no problem to run dnsmasq by entering service dnsmasq start.

What can I try to get dnsmasq started automatically whenever I start the container?

I know about the extra_hosts option in docker-compose (or as a docker parameter), but I want a general solution that resolves any request to *.localhost to the Docker container itself (127.0.0.1).
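
For completeness, this is how I intend to verify the wildcard resolution once dnsmasq is actually running (a sketch; dig comes from the dnsutils package, which may need to be installed in the container):

apt-get update && apt-get install -y dnsutils
dig +short @127.0.0.1 xyz.localhost    # should answer 127.0.0.1
dig +short @127.0.0.1 foo.localhost    # any subdomain of localhost should answer 127.0.0.1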

programming languages – key benefit of using containers?

I'm not someone with a computer science background; I'm just curious about container technology, and of course I did some googling to see what a container is.

I hope this is the correct site for container-related questions.

Here are my questions.

  1. What is the primary benefit that container technology offers: packaging the environment (libraries, configuration, etc.) for code/programs, or offering great portability across different computer systems? (Someone told me that portability isn't the key benefit, but close to it.)

  2. Is it doable and practical to use containers to provide services just like servers do?

  3. Can containers isolate malicious code/apps to prevent them from affecting the underlying operating system?

How to make an nginx Docker container auto-restart if it stops proxying correctly?

I mean it stops proxying for whatever reason, not a crash. It happened to me today: I had to restart it manually, but I lost the error log, so I'm not sure what happened. It had been working fine for months before that, but I lost 6 hours of uptime.

Here are my Docker Compose settings:

version: '3'
services:
  nginx:
    network_mode: host
    container_name: port-80-reverse-proxy
    image: nginx:1.15-alpine
    restart: unless-stopped
    volumes:
      - ./config/nginx:/etc/nginx/conf.d
    ports:
      - 80:80
      - 443:443
    command: /bin/sh -c 'echo started; while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g "daemon off;"'

My nginx config file is just a long list of reverse-proxy blocks like this:

server {
  listen 80;
  server_name example.com;
  server_tokens off;

  location / {
    proxy_pass http://localhost:9000;
    proxy_set_header Host $host;
    proxy_redirect off;
  }
}

I know Docker has HEALTHCHECK, but I'm not sure specifically what command would be best to make sure the proxying is still working.
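
What I have in mind is roughly the following: the probe would fetch a page through one of the proxied vhosts, and Docker would mark the container unhealthy when it fails (a sketch only; example.com and the target vhost are placeholders, and I believe the alpine image ships busybox wget rather than curl):

# Probe command for a HEALTHCHECK / the compose healthcheck "test" entry
wget -q --spider --header 'Host: example.com' http://127.0.0.1:80/ || exit 1

# Checking the resulting health status from the host
docker inspect --format '{{.State.Health.Status}}' port-80-reverse-proxy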