
Higher order infrastructure

[Slide screenshots from the GOTO 2016 talk "Higher Order Infrastructure"]
Developers need not worry about the underlying infrastructure; all they have to look at are the services running on it and the stack they write.
You do not have to worry about where your code is running, which leads to faster rollouts, faster releases, and faster deployments. Even rollbacks become a piece of cake once Docker is part of your infrastructure.
If there is any change in your service, all you have to do is change the YAML (YAML Ain't Markup Language) file and you will have a completely new service in minutes. Docker was built for scalability and high availability.
It is very easy to load balance your services in Docker, and to scale up and scale down as per your requirements.
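As a sketch of how little needs to change, here is a minimal docker-compose file (service names and image tags are illustrative, not from the talk). Rolling out a new version is just bumping the tag and re-running `docker-compose up -d`:

```yaml
version: "2"
services:
  web:
    # Bump this tag and re-run `docker-compose up -d`
    # to roll the service forward -- or back, for a rollback.
    image: myorg/web:1.4.2
    ports:
      - "80:5000"
  redis:
    image: redis:alpine
```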
The most basic application demoed by Docker is the following cats-versus-dogs polling polyglot application.
[Slides: architecture of the polling application]
Each part of this application is written and maintained by a different team, and Docker is what ties them all together.
[Slide: the components required to get the Docker application up and running]
Docker Swarm is a Docker cluster manager: you run ordinary docker commands against it, and they are executed across the whole cluster instead of on just one machine.
The following is the Docker Swarm architecture:
[Slide: Docker Swarm architecture diagram]
Containers provide an elegant solution for those looking to design and deploy applications at scale. While Docker provides the actual containerizing technology, many other projects assist in developing the tools needed for appropriate bootstrapping and communication in the deployment environment.
One of the core technologies that many Docker environments rely on is service discovery. Service discovery allows an application or component to discover information about their environment and neighbors. This is usually implemented as a distributed key-value store, which can also serve as a more general location to dictate configuration details. Configuring a service discovery tool allows you to separate your runtime configuration from the actual container, which allows you to reuse the same image in a number of environments.
The basic idea behind service discovery is that any new instance of an application should be able to programmatically identify the details of its current environment. This is required in order for the new instance to be able to "plug in" to the existing application environment without manual intervention. Service discovery tools are generally implemented as a globally accessible registry that stores information about the instances or services that are currently operating. Most of the time, in order to make this configuration fault tolerant and scalable, the registry is distributed among the available hosts in the infrastructure.
While the primary purpose of service discovery platforms is to serve connection details to link components together, they can be used more generally to store any type of configuration. Many deployments leverage this ability by writing their configuration data to the discovery tool. If the containers are configured so that they know to look for these details, they can modify their behavior based on what they find.

How Does Service Discovery Work?

Each service discovery tool provides an API that components can use to set or retrieve data. Because of this, for each component, the service discovery address must either be hard-coded into the application/container itself, or provided as an option at runtime. Typically the discovery service is implemented as a key-value store accessible using standard HTTP methods.
The way a service discovery portal works is that each service, as it comes online, registers itself with the discovery tool. It records whatever information a related component might need in order to consume the service it provides. For instance, a MySQL database may register the IP address and port where the daemon is running, and optionally the username and credentials needed to sign in.
When a consumer of that service comes online, it is able to query the service discovery registry for information at a predefined endpoint. It can then interact with the components it needs based on the information it finds. One good example of this is a load balancer. It can find every backend server that it needs to feed traffic to by querying the service discovery portal and adjusting its configuration accordingly.
This takes the configuration details out of the containers themselves. One of the benefits of this is that it makes the component containers more flexible and less bound to a specific configuration. Another benefit is that it makes it simple to make your components react to new instances of a related service, allowing dynamic reconfiguration.
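The register-then-query flow described above can be sketched in a few lines of Python. This is a deliberately naive in-memory stand-in for a real distributed store like etcd or consul, and every name in it is illustrative:

```python
class ServiceRegistry:
    """Toy in-memory sketch of a service-discovery registry.

    Real tools (etcd, consul, zookeeper) distribute this store across
    hosts and expose it over an HTTP API; the interface here only
    mirrors the register/lookup idea from the text.
    """

    def __init__(self):
        self._services = {}  # service name -> list of instance records

    def register(self, name, host, port, **extra):
        """A service announces itself as it comes online."""
        entry = {"host": host, "port": port, **extra}
        self._services.setdefault(name, []).append(entry)
        return entry

    def deregister(self, name, host, port):
        """Remove an instance, e.g. when a health check fails."""
        self._services[name] = [
            e for e in self._services.get(name, [])
            if not (e["host"] == host and e["port"] == port)
        ]

    def lookup(self, name):
        """A consumer queries a predefined endpoint for details."""
        return list(self._services.get(name, []))


registry = ServiceRegistry()
# A MySQL daemon registers its address, port, and a credential hint.
registry.register("mysql", "10.0.0.5", 3306, user="app")
# Two web backends register themselves.
registry.register("web", "10.0.0.7", 8080)
registry.register("web", "10.0.0.8", 8080)

# A load balancer discovers every backend it should feed traffic to.
backends = registry.lookup("web")
print(len(backends))  # 2
```

The load balancer never hard-codes its backends; re-running `lookup` after a new instance registers is what enables the dynamic reconfiguration mentioned above.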

What Are Some Common Service Discovery Tools?

Now that we've discussed some of the general features of service discovery tools and globally distributed key-value stores, we can mention a few of the projects that relate to these concepts.
Some of the most common service discovery tools are:
  • etcd: This tool was created by the makers of CoreOS to provide service discovery and globally distributed configuration to both containers and the host systems themselves. It implements an HTTP API and has a command-line client available on each host machine.
  • consul: This service discovery platform has many advanced features that make it stand out including configurable health checks, ACL functionality, HAProxy configuration, etc.
  • zookeeper: This example is a bit older than the previous two, providing a more mature platform at the expense of some newer features.
Some other projects that expand basic service discovery are:
  • crypt: Crypt allows components to protect the information they write using public key encryption. The components that are meant to read the data can be given the decryption key. All other parties will be unable to read the data.
  • confd: Confd is a project aimed at allowing dynamic reconfiguration of arbitrary applications based on changes in the service discovery portal. The system involves a tool to watch relevant endpoints for changes, a templating system to build new configuration files based on the information gathered, and the ability to reload affected applications.
  • vulcand: Vulcand serves as a load balancer for groups of components. It is etcd aware and modifies its configuration based on changes detected in the store.
  • marathon: While marathon is mainly a scheduler (covered later), it also implements a basic ability to reload HAProxy when changes are made to the available services it should be balancing between.
  • frontrunner: This project hooks into marathon to provide a more robust solution for updating HAProxy.
  • synapse: This project introduces an embedded HAProxy instance that can route traffic to components.
  • nerve: Nerve is used in conjunction with synapse to provide health checks for individual component instances. If the component becomes unavailable, nerve updates synapse to bring the component out of rotation.
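To give the key-value HTTP API a concrete flavour, here is roughly how a service could register and be discovered using etcd's v2 keys API. This is a sketch: the address, key path, and JSON payload are illustrative, and it assumes an etcd instance listening on the default port:

```shell
# The MySQL service writes its connection details under a known key
curl http://127.0.0.1:2379/v2/keys/services/mysql \
    -XPUT -d value='{"host": "10.0.0.5", "port": 3306}'

# A consumer queries the predefined endpoint to discover the service
curl http://127.0.0.1:2379/v2/keys/services/mysql
```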
[Slides: docker-machine commands]
The command above is used to create a Consul machine droplet in DigitalOcean.
[Slide: docker-machine command]
Use the above command to create the Docker Swarm master, which attaches itself to Consul.
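The commands on these slides are along the following lines. This is a sketch only: the access-token variable, machine names, and the `progrium/consul` image are assumptions based on the typical classic-Swarm demo of that era, not transcribed from the talk:

```shell
# Create a droplet on DigitalOcean and run Consul on it
docker-machine create -d digitalocean \
    --digitalocean-access-token $DO_TOKEN consul-machine
eval $(docker-machine env consul-machine)
docker run -d -p 8500:8500 progrium/consul -server -bootstrap

# Create the Swarm master, pointing its discovery backend at Consul
docker-machine create -d digitalocean \
    --digitalocean-access-token $DO_TOKEN \
    --swarm --swarm-master \
    --swarm-discovery consul://$(docker-machine ip consul-machine):8500 \
    swarm-master
```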
[Slide: Swarm scheduling strategies]
In Docker Swarm you can define your scheduling strategies in a very fine-grained way.
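As an illustration of that fine-grained control (the label and image names are made up), classic Swarm lets you pick a scheduling strategy per cluster and steer individual containers with constraint filters:

```shell
# Choose how Swarm packs containers onto nodes:
# spread (default), binpack, or random
docker-machine create -d digitalocean \
    --swarm --swarm-master \
    --swarm-strategy binpack \
    --swarm-discovery consul://$(docker-machine ip consul-machine):8500 \
    swarm-master

# Schedule only onto nodes whose engine was started
# with the label storage=ssd
docker run -d -e constraint:storage==ssd mysql
```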
To scale up, all you have to type is `docker-compose scale <your-service-name>=<count>` and you are done.
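For example, with a hypothetical `worker` service (newer Compose releases replace this with `docker-compose up --scale`):

```shell
docker-compose scale worker=5
```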
Auto-scaling will need a monitoring service to be plugged in.
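A minimal sketch of why monitoring is the missing piece: the autoscaler is just a rule that turns a monitored metric into a replica count, which you would then apply with a scale command. The thresholds and function names here are illustrative, not from the talk:

```python
def desired_replicas(current, cpu_percent, low=30.0, high=70.0,
                     min_r=1, max_r=10):
    """Naive autoscaling rule: add a replica when the monitored CPU
    average is above `high`, remove one when below `low`, and clamp
    the result to [min_r, max_r]. A real setup would read cpu_percent
    from a monitoring service, then apply the result with something
    like `docker-compose scale worker=<n>`."""
    if cpu_percent > high:
        target = current + 1
    elif cpu_percent < low:
        target = current - 1
    else:
        target = current
    return max(min_r, min(max_r, target))


print(desired_replicas(3, 85.0))  # hot: scale up -> 4
print(desired_replicas(3, 10.0))  # idle: scale down -> 2
print(desired_replicas(1, 10.0))  # clamped at the floor -> 1
```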
