
Posts

Showing posts from August, 2017

terraform iterate over string

ingress = "22:192.168.0.0/24:tcp,80:172.16.120.0/16:tcp,8080:0.0.0.0/0:tcp" egress = "22:192.168.0.0/24:tcp,80:172.16.120.0/16:tcp,8081:127.0.0.0/0:udp" resource "openstack_compute_secgroup_v2" "secgroup_1" { name = "secgroup" description = "my security group" count = "${length(split(",",var.ingress))}" rule { from_port = "${element(split(":",element(split(",",var.ingress),count.index)), 0)}" to_port = "${element(split(":",element(split(",",var.ingress),count.index)), 0)}" ip_protocol = "${element(split(":",element(split(",",var.ingress),count.index)), 2)}" cidr = "${element(split(":",element(split(",",var.ingress),count.index)), 1)}" } } Note : This will create multiple security groups if you want single security group and multiple rules use following code: resource "openstack_networ

Jenkins - Publish Over SSH Plugin: How to copy directory

Problem: I'm trying to use Jenkins' Publish Over SSH plugin to copy all files AND sub-directories of a given directory, but so far I've only been able to copy files, NOT directories. I have a directory named foo in my workspace, and during the build I want to copy everything in it to a remote server. I've tried the pattern foo/**, but it doesn't copy the sub-directories. Solution: For a recursive copy of the directory, use the pattern foo/**/*

Jenkins: publish over ssh does not put files to remote server

Problem: I have a strange issue on the latest Jenkins 1.634. Publish Over SSH logs that it put the files correctly, but nothing appears on the remote server. For example, the log shows:

SSH: cd [var/www/data-fb-localtest]
SSH: OK
SSH: put [asm.js]
SSH: OK
SSH: put [asm.js.gz]
SSH: OK
SSH: put [hero.data]
SSH: OK
SSH: put [hero_main.js]
SSH: OK
SSH: cd [/home/dev]
SSH: OK
SSH: cd [var/www/data-fb-localtest/]
SSH: OK
SSH: put [achievements.exm]
SSH: OK
SSH: put [ai.exm]
SSH: OK
SSH: put [atlas0.atlas]
SSH: OK
SSH: put [atlas0.rgbz]
SSH: OK

but nothing appears in var/www/data-fb-localtest.

Solution: I found the issue. I had not set the remote root directory, and in the publish task I used what I thought was an absolute path. The plugin does not treat it as absolute; it resolves the path relative to my user's home directory.
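In other words, the files most likely did land, just under the SSH user's home directory (note the "cd [/home/dev]" in the log) rather than at the filesystem root. A quick check on the remote host, using the path from the log above, would be:

# on the remote server, as the user Jenkins connects with
ls -l ~/var/www/data-fb-localtest/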

Change to jenkins user

Look at the shell specified in /etc/passwd for the jenkins user. You can do so by running something like:

grep jenkins /etc/passwd

The output will look similar to this:

jenkins:x:1001:1001::/usr/local/jenkins:/bin/false

The last field is the user's login shell. Here you can see it is set to /bin/false, which exits immediately. The solution is to specify which shell to use, as described:

su -s /bin/bash jenkins

Or modify the login shell of the jenkins user with usermod(8) (executed as root):

usermod -s /bin/bash jenkins

Then grep jenkins /etc/passwd should output something like:

jenkins:x:1001:1001::/usr/local/jenkins:/bin/bash

After which, su - jenkins will work as you expect.

unconfigured table schema_keyspaces cassandra

The issue you are having here is that you are exploring Cassandra with docs written for versions 2.0 and 2.1. My guess is that you are actually running Cassandra 3.0. In that case, you need to query the "keyspaces" table in the system_schema keyspace:

cassandra@cqlsh:system_schema> SELECT * FROM system_schema.keyspaces;
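For comparison, the older query shown in the 2.x docs (the one that triggers the "unconfigured table schema_keyspaces" error on 3.0) and its 3.x replacement, as typed at the cqlsh prompt:

SELECT * FROM system.schema_keyspaces;    -- Cassandra 2.0/2.1 docs
SELECT * FROM system_schema.keyspaces;    -- Cassandra 3.0 and later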

How to unfreeze after accidentally pressing Ctrl-S in a terminal?

This feature is called software flow control (XON/XOFF flow control). When one end of the data link (in this case the terminal emulator) can't receive any more data (because the buffer is full or nearly full, or because the user presses C-s), it sends an "XOFF" to tell the sending end of the data link to pause until an "XON" signal is received. What happens under the hood is that the "XOFF" tells the TTY driver in the kernel to put the process that is sending data into a sleep state (like pausing a movie) until the TTY driver receives an "XON", telling the kernel to resume the process as if it had never been stopped. C-s enables terminal scroll lock, which prevents your terminal from scrolling (by sending an "XOFF" signal to pause the output of the software). C-q disables the scroll lock, resuming terminal scrolling (by sending an "XON" signal to resume the output of the software). This feature is legacy (ba
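So the immediate fix for a frozen terminal is simply C-q. If you would rather not be able to freeze the terminal this way at all, a common approach (assuming a shell where you can run stty, e.g. from ~/.bashrc) is to switch XON/XOFF flow control off:

# disable XON/XOFF flow control so C-s no longer freezes the terminal
stty -ixon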

Salt stack formulas

1. Add all the configuration from pillar.sls into the target file:

[code]
{%- if salt['pillar.get']('elasticsearch:config') %}
/etc/elasticsearch/elasticsearch.yml:
  file.managed:
    - source: salt://elasticsearch/files/elasticsearch.yml
    - user: root
    - template: jinja
    - require:
      - sls: elasticsearch.pkg
    - context:
        config: {{ salt['pillar.get']('elasticsearch:config', '{}') }}
{%- endif %}
[/code]

2. Create multiple directories if they do not exist:

[code]
{% for dir in (data_dir, log_dir) %}
{% if dir %}
{{ dir }}:
  file.directory:
    - user: elasticsearch
    - group: elasticsearch
    - mode: 0700
    - makedirs: True
    - require_in:
      - service: elasticsearch
{% endif %}
{% endfor %}
[/code]

3. Retrieve a value from pillar:

[code]
{% set data_dir = salt['pillar.get']('elasticsearch:config:path.data') %}
[/code]

4. Include a new state in an existing state, or add a new state: a. Create/Edit init
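The excerpt is cut off in item 4. As a rough sketch (the state names below are hypothetical, following the elasticsearch example above), including existing states from an init.sls looks like:

[code]
# elasticsearch/init.sls (hypothetical layout)
include:
  - elasticsearch.pkg
  - elasticsearch.config
  - elasticsearch.service
[/code]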

Yum : Operation too slow. Less than 1000 bytes/sec transferred the last 30 seconds

First thing to try is the usual yum clean all. You might be running 3rd-party repositories without yum-plugin-priorities installed. This could compromise your system, so please install and configure yum-plugin-priorities. You could also try the following: yum --disableplugin=fastestmirror update.

minrate: sets the low-speed threshold in bytes per second. If the server is sending data slower than this for at least 'timeout' seconds, yum aborts the connection. The default is 1000.

timeout: number of seconds to wait for a connection before timing out. Defaults to 30 seconds. This may be too short a time for extremely overloaded sites.

You can reduce minrate and/or increase timeout. Just add/edit these parameters in the [main] section of /etc/yum.conf. For example:

[main]
...
minrate=1
timeout=300
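Putting the quick checks together as a command sequence (package name as mentioned above; adjust for your repo setup):

yum clean all
yum install yum-plugin-priorities
yum --disableplugin=fastestmirror update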

Salt stack issues

The function "state.apply" is running as PID
Restart salt-minion with the command: service salt-minion restart

No matching sls found for 'init' in env 'base'
Add a top.sls file in the directory where your main sls file is present. Create the file as follows:

base:
  'web*':
    - apache

If the sls is present in a subdirectory, e.g. elasticsearch/init.sls, then write the top.sls as:

base:
  '*':
    - elasticsearch.init

How to execute saltstack-formulas
Create the file /srv/pillar/top.sls with content:

base:
  '*':
    - salt

Create the file /srv/pillar/salt.sls with content:

salt:
  master:
    worker_threads: 2
    fileserver_backend:
      - roots
      - git
    gitfs_remotes:
      - git://github.com/saltstack-formulas/epel-formula.git
      - git://github.com/saltstack-formulas/git-formula.git
      - git://github.com/saltstack-formulas/nano-formula.git
      - git://github.com/saltstack-f
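With the pillar files in place, a typical way to push the new pillar to the minions and apply the salt formula (standard Salt commands, assuming the formula is reachable through your file roots or gitfs) is:

salt '*' saltutil.refresh_pillar
salt '*' state.apply salt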

ElasticSearch Issues

java.lang.IllegalArgumentException: unknown setting [node.rack] please check that any required plugins are installed, or check the breaking changes documentation for removed settings
Node-level attributes used for allocation filtering, forced awareness or other node identification/grouping must be prefixed with node.attr. In previous versions it was possible to specify node attributes with the node. prefix. All node attributes except node.master, node.data and node.ingest must be moved to the new node.attr. namespace.

Unknown setting mlockall
Replace bootstrap.mlockall with bootstrap.memory_lock.

Unable to lock JVM Memory: error=12, reason=Cannot allocate memory
Edit /etc/security/limits.conf and add the following lines:

elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited

Edit /usr/lib/systemd/system/elasticsearch.service and uncomment the line:

LimitMEMLOCK=infinity

Execute the following commands:

systemctl daemon-reload
systemctl elast
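For the first error, the change in elasticsearch.yml is just the prefix; the attribute name "rack" and the value below are only illustrative:

# old (pre-5.x) style, now rejected:
# node.rack: rack1

# new style: custom node attributes live under node.attr.*
node.attr.rack: rack1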
