Upload artifacts to AWS S3

This guide describes how to upload files (for example, backup archives) to AWS S3 using a small Ruby script.

Step-by-step guide

Execute the following steps:
  1. Install Ruby with the following commands on the machine where the backup will be stored:
    gpg2 --keyserver hkp://keys.gnupg.net --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
    sudo \curl -L https://get.rvm.io | bash -s stable --ruby
    source /home/compose/.rvm/scripts/rvm
    rvm list known   # Lists the available Ruby versions
    Install the version of your choice with:
    rvm install 2.3.0   # Where 2.3.0 is the Ruby version to install
    Or install the latest Ruby version with:
    rvm install ruby --latest
    Check the installed Ruby version with:
    ruby -v
  2. Check whether RubyGems is present on your machine: gem -v
  3. If it is not, install it with: sudo yum install rubygems
  4. Then install the aws-sdk gem: gem install aws-sdk (a quick sanity check is sketched after these steps)
  5. Save the code below in a file named upload-to-s3.rb:
    # Note: Replace the values below with your production settings:
    # 1. AWS_ACCESS_KEY_ID
    # 2. AWS_SECRET_ACCESS_KEY
    # 3. AWS_REGION
    # 4. 'bucket_name' is the actual bucket name in S3
    require 'aws-sdk'

    # Tars up `directory`, uploads the archive to S3, then removes the local copy.
    # NOTE: the `bucket` argument is the intended destination path, but
    # put_object below still targets the hardcoded 'bucket_name'.
    def upload(file_name, destination, directory, bucket)
      destination_file_name = destination
      puts "Creating #{destination_file_name} file..."
      # Compress the backup folder into a tarball
      `tar -cvzf #{destination_file_name} #{directory}`
      puts "Created #{destination_file_name} file..."
      puts "Uploading #{destination} file to AWS..."
      ENV['AWS_ACCESS_KEY_ID'] = 'Your key here'
      ENV['AWS_SECRET_ACCESS_KEY'] = 'Your secret here'
      ENV['AWS_REGION'] = 'Your region here'
      # The client reads the credentials and region from the environment
      s3 = Aws::S3::Client.new
      File.open(destination_file_name, 'rb') do |file|
        s3.put_object(bucket: 'bucket_name', key: file_name, body: file)
      end
      puts "Uploaded #{destination} file to AWS..."
      puts "Deleting #{destination} file..."
      `rm -rf #{destination}`
      puts "Deleted #{destination} file..."
    end

    # Removes any existing .tar.gz files from the backup folders.
    # NOTE: defined for housekeeping but not called by start; invoke it manually if needed.
    def clear(nfs_loc)
      nfs_loc.each_pair do |key, _value|
        puts "Deleting tarballs under #{key}..."
        Dir["#{key}/*.tar.gz"].each do |path|
          puts path
          `rm -rf #{path}`
        end
        puts "Deleted tarballs under #{key}."
      end
    end

    # Walks each backup directory and uploads every subfolder as its own tarball
    def start
      nfs_loc = { '/backup_dir' => 'bucket_name/data' }
      nfs_loc.each_pair do |key, value|
        puts "#{key} #{value}"
        Dir.glob("#{key}/*") do |dname|
          filename = '%s.%s' % [dname, 'tar.gz']
          file = File.basename(filename)
          folder_name = File.basename(dname)
          bucket = '%s/%s' % [value, folder_name]
          puts '..... Uploading started for %s file to AWS S3 .....' % [file]
          t = '%s/' % dname
          upload(file, filename, t, bucket)
          puts '..... Uploading finished for %s file to AWS S3 .....' % [file]
        end
      end
    end

    start
  6. Then run the script (a verification sketch follows these steps):
    ruby upload-to-s3.rb
  7. If you are running this from a Jenkins job, add the following line to the pre-build script so that RVM's Ruby is available:
    source ~/.rvm/scripts/rvm
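
As a quick sanity check after installing the gem in step 4, the short sketch below simply confirms that aws-sdk loads and prints its version. The file name check-aws-sdk.rb is hypothetical, and the Gem.loaded_specs lookup is just one way to read the installed version:

    # check-aws-sdk.rb -- minimal sanity check (hypothetical file name)
    require 'aws-sdk'

    spec = Gem.loaded_specs['aws-sdk']
    puts spec ? "aws-sdk gem version: #{spec.version}" : 'aws-sdk gem not found'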
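
Once the script has run (step 6), you can confirm the upload landed by listing the bucket's contents. A minimal sketch, assuming the same environment-variable credentials and the placeholder bucket name 'bucket_name' from the script above:

    # verify-upload.rb -- hedged verification sketch (hypothetical file name)
    require 'aws-sdk'

    s3 = Aws::S3::Client.new   # region/credentials read from the environment
    resp = s3.list_objects(bucket: 'bucket_name')   # replace with your real bucket
    resp.contents.each do |obj|
      puts "#{obj.key}  #{obj.size} bytes  #{obj.last_modified}"
    end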
