
Check if a remote directory exists via SSH

I use Python libraries during automation to check whether a directory exists, so my script creates the directory I want only if it is not already present locally. But what if I want to do the same thing remotely?

I found a few posts showing how to do this with paramiko's SFTP and other approaches. I'm OK with paramiko, but it has to be installed via pip; I can't simply use it without installing, like any other standard library. So I prefer the SSH way.

I use the 'test' command remotely. Let's see:

if sudo ssh -i /home/downloads/server.pem ubuntu@1.1.1.1 -o "StrictHostKeyChecking no" 'test -d /home/ubuntu/test_dir';
then
    echo 'yes'
fi
which works fine.
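The same check can be scripted from Python using only the standard subprocess module. Here is a minimal sketch, assuming passwordless key-based SSH is already set up; the key path, user, and host below are placeholders:

```python
import subprocess


def build_ssh_test_cmd(key_path, user, host, remote_dir):
    """Build the ssh command line that runs `test -d` on the remote host."""
    return [
        "ssh", "-i", key_path,
        "-o", "StrictHostKeyChecking no",
        "{0}@{1}".format(user, host),
        "test -d {0}".format(remote_dir),
    ]


def remote_dir_exists(key_path, user, host, remote_dir):
    """Return True if the remote directory exists (ssh exits with status 0)."""
    rc = subprocess.call(build_ssh_test_cmd(key_path, user, host, remote_dir))
    return rc == 0
```

Since `test -d` sets the exit status, the ssh return code is all we need; no output parsing required.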

Automate SCP retries in Python



Here is a small script to retry scp in Python; I added a few specific exceptions. You can use them or replace all of them with a single generic exception.

#!/usr/bin/python
import time
import logging
from subprocess import Popen, PIPE

logger = logging.getLogger(__name__)


def scp_retry(cmd):
    while True:
        try:
            logger.info('Executing: ' + cmd)
            run = Popen(cmd, shell=True, stdout=PIPE, stderr=PIPE)
            out, err = run.communicate()
            # decode so the string checks below also work on Python 3,
            # where communicate() returns bytes
            out, err = out.decode(), err.decode()
            print(out, err)
            logger.info('SCP StdOut: ' + out)
            if err == "":
                print('copied*****')
                break
            elif "Operation timed out" in err:
                raise RuntimeError('Connection timed out')
            elif "Network is unreachable" in err:
                raise ValueError('Network error, no connectivity')
            elif "Permission denied (publickey,password,keyboard-interactive)" in err:
                raise BaseException('Ssh is not ready yet ...')
        except RuntimeError:
            logger.error("SCP StdErr: " + err)
        except ValueError:
            logger.error("SCP StdErr: " + err)
        except BaseException:
            logger.error("SCP StdErr: " + err)
        print('retrying')
        time.sleep(5)
    return True


scp_cmd = 'scp -B -o "StrictHostKeyChecking no" -o LogLevel=ERROR -i /Users/gil/.ssh/server.pem /Users/gil/Desktop/ca.crt ubuntu@104.154.92.0:/home/ubuntu/'
scp_retry(scp_cmd)

A very important thing to keep in mind is the scp options. I wrote the above script while automating in the AWS cloud, where I create a VM and immediately scp a few files to it. Since the VM won't be ready for a minute or so, I keep retrying and hit these different types of exceptions.

The first exception is for "connection timed out": my script starts pushing files to the newly created VM immediately after creation, even before the SSH port is open, so I get "connection timed out" for a few seconds.

The second is for internet connectivity: if you run the script from a computer without internet connectivity, you'll see this one.

The third one is very important. After the SSH port is ready but before the VM is ready to validate the SSH key we are using, you'll be prompted for a password even though you are using an SSH key. The VM stays in this state for a couple of seconds or more.

The -B option avoids prompting for a password in the state mentioned above. The advantage of using this option is that you won't be blocked at a password prompt; the connection times out instead, our script retries, and we succeed on a later attempt.

"StrictHostKeyChecking no" This will disables Host Key Checking, so that you don't have to press "yes" while connecting to new vm for the first time.

"LogLevel=ERROR": This will disable all log messages show up while ssh/scp. Reason i use is i'm using Popen.communicate, i'm capturing stdout and stderr, to check if stderr is None or not. If i don't use this option, even successful scp, warning messages will be piped into stderr and my if condition will fail even after successful scp. Below is the warning message you see , while you do scp for the first time.

example ": Warning: Permanently added 'x.x.x.x' (ED25519) to the list of known hosts.\r\n" gil...

Creating a VPC in the AWS Cloud

A VPC lets you isolate your subnets, and you can have multiple VPCs depending on your requirements. As the AWS documentation says, you can have your web servers in a VPC that is exposed to the internet and all your DB servers in another VPC that is not exposed to the internet.

The reason I'm writing this post is that I accidentally deleted my default VPC, so I wanted to create one with the same functionality as the default one, meaning instances in this VPC should be exposed to the internet.

Please follow the link below from AWS to create a VPC; the documentation is very good.


A few things I just want to highlight:

1. Create a VPC.
2. Create a subnet in the newly created VPC.
3. Create an Internet Gateway and attach it to the new VPC.
4. Edit the route table and add an entry routing all traffic to the outside world via the Internet Gateway.

Things I would like to highlight so you don't miss them and end up wasting time troubleshooting:


1. After creating the subnet, right-click on it, choose "Modify Auto-Assign Public IP", and enable it.

If you don't enable it, instances in the new VPC won't get public IP addresses automatically.

2. Go to "Route Tables" and add entry to allow all traffic to outside world via Internet Gateway.



Unless you do this step, you won't be able to access instances from the outside world using their public IPs. The route above allows instances to reach the internet; if you don't add this entry, requests from the outside world will reach the instance, but it can't reply or acknowledge them, since traffic to the outside world is not allowed via the Internet Gateway.

I tested this using tcpdump: I could see requests reaching the instances from the outside world, but the instances couldn't talk back.

My intention is to cover the points that the documentation didn't.

gil ...

Python Decorators

Decorators take a function as an argument and manipulate it without having to modify the original function.

Here is a small example of decorators. My base function, "run_cmd", takes any bash command as an argument and executes it using subprocess.Popen, which returns two values, "stdout" and "stderr".

What I want to do, without touching the "run_cmd" function, is add some checks: say, if "stderr" is empty, print "stdout"; otherwise, exit with the error.

So I write a decorator function with all these checks and then decorate my "run_cmd". Let's see how.

#!/usr/bin/python
import subprocess


def checker(f):
    def inner(a):
        o, e = f(a)
        if e != '':
            exit(e)
        else:
            print("Output of your command is: " + o)
    return inner


@checker
def run_cmd(value):
    p = subprocess.Popen(value, shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    out, err = p.communicate()
    # decode so strip/concatenation work on Python 3, where communicate() returns bytes
    return out.decode().strip('\n'), err.decode().strip('\n')


If you observe, I'm passing a command to "run_cmd". Since it's decorated, the whole function is taken as an argument by "checker", but the arguments I pass to "run_cmd" are handed to the "inner" function inside "checker". There I call the original "run_cmd", assign its return values to "o" and "e", and then everything is as usual: write your if conditions and so on.

See the output of the above command:

run_cmd('whoami')
/usr/bin/python /Users/gil/Desktop/decorator.py
Output of your command is: gil

Process finished with exit code 0

See the output of a wrong command:

run_cmd('whoamia')
/usr/bin/python /Users/gil/Desktop/decorator.py
/bin/sh: whoamia: command not found

Process finished with exit code 1
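One caveat with the version above: "inner" accepts exactly one positional argument, and the decorated function loses its name and docstring. A more general sketch (my variation, not part of the original post) uses *args/**kwargs and functools.wraps, and also returns the output so callers can use it:

```python
import functools
import subprocess


def checker(f):
    @functools.wraps(f)  # preserve the wrapped function's name and docstring
    def inner(*args, **kwargs):
        o, e = f(*args, **kwargs)
        if e != '':
            raise SystemExit(e)
        print("Output of your command is: " + o)
        return o
    return inner


@checker
def run_cmd(value):
    """Run a shell command and return (stdout, stderr) stripped of newlines."""
    p = subprocess.Popen(value, shell=True, stdout=subprocess.PIPE,
                         stderr=subprocess.PIPE)
    out, err = p.communicate()
    # decode bytes so this works on Python 3 as well
    return out.decode().strip('\n'), err.decode().strip('\n')
```

With functools.wraps, run_cmd.__name__ is still "run_cmd" after decoration, which keeps tracebacks and logging readable.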

gil...

Where= setting doesn't match unit name. Refusing. (systemd)

The first thing you should keep in mind while creating "mount units" is that mount units must be named after the mount point directories they control. For example, if the mount point is "/var/storage/disk1", then the mount unit must be named "var-storage-disk1.mount".

If your mount unit name and mount point path differ, you'll hit the error below.

storage.mount: Where= setting doesn't match unit name. Refusing.
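For example, a hypothetical unit for the mount point /var/storage/disk1 (the device and filesystem type here are assumptions) must live in a file named var-storage-disk1.mount:

```ini
[Unit]
Description=Mount disk1

[Mount]
What=/dev/sdb1
Where=/var/storage/disk1
Type=ext4

[Install]
WantedBy=multi-user.target
```

When in doubt, `systemd-escape -p --suffix=mount /var/storage/disk1` prints the unit name systemd expects for a given path.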

So be cautious while creating mount units.

gil ...