
Showing posts from November, 2018

gcloud commands kubernetes

gcloud config list — lists the configurations that have been set. To see all the settings, including the defaults:

    gcloud config list --all

App Engine is the platform-as-a-service option: serverless and largely ops-free. Compute Engine is the IaaS: fully controllable down to the OS. Container Engine is a cluster of machines running Kubernetes and hosting containers.

Case study — hosting a website:
1. Static, no SSL: for this, all we need is storage.
2. SSL, CDN: needs HTTPS serving, content delivery, release management, etc.
3. Load balancing or scaling will be useful if we have a large amount of traffic coming in — get VMs.
4. Lots of dependencies: maybe we have gone for a microservice architecture. Deployment is painful — create containers, manage clusters.
5. The above might still be difficult to manage, so at the next level we have Heroku and Engine Yard; on these, just focus on the code and forget the rest.

We will study each in detail. 1. for stora…

opening multiple ports tunnels ngrok in ubuntu

Location of the config yml file: /home/example/.ngrok2/ngrok.yml

Content of the config file:

    authtoken: 4nq9771bPxe8ctg7LKr_2ClH7Y15Zqe4bWLWF9p
    tunnels:
      app-foo:
        addr: 80
        proto: http
        host_header: app-foo.dev
      app-bar:
        addr: 80
        proto: http
        host_header: app-bar.dev

How to start ngrok using the config file:

    ngrok start --all

kubernetes services

A Service is an object, just like the Pods, ReplicaSets, and Deployments we worked with before. One of its use cases is to listen on a port on the node and forward requests on that port to a port on the Pod.

Service types:
1. NodePort — makes an internal Pod accessible on a port on the node.
2. ClusterIP — creates a virtual IP inside the cluster to enable communication between different services, such as frontend servers to backend servers.
3. LoadBalancer

How to create a service: it is just like creating a Deployment or ReplicaSet. Example:

    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-service
    spec:
      type: NodePort
      ports:
        - targetPort: 80
          port: 80
          nodePort: 30008
      selector:
        app: myapp

ClusterIP: a full-stack application has a number of different kinds of Pods running — you may have a frontend Pod, a web server, etc. These all need to communicate with each other. How do we establish a connection between them? Since the Pods can go…
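For the backend-communication case described above, a minimal ClusterIP Service might look like the sketch below. This follows the same shape as the NodePort example; the service name back-end and the selector label app: myapp-backend are illustrative, not from the post:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: back-end          # illustrative name
spec:
  type: ClusterIP         # the default type; could be omitted
  ports:
    - targetPort: 80      # port on the pod
      port: 80            # port exposed by the service inside the cluster
  selector:
    app: myapp-backend    # illustrative label matching the backend pods
```

Other pods can then reach the backend via the service name (back-end) instead of individual pod IPs.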

networking in kubernetes

Let us start with a single-node Kubernetes cluster. The node has an IP, e.g. 192.168.1.2. If this is minikube, then we are talking about the minikube virtual machine inside your hypervisor. Unlike Docker, where the IP address is assigned to the container, in Kubernetes the IP address is assigned to the Pod. So when Kubernetes is initially configured, it creates an internal private network with the address 10.244.0.0, and all Pods are attached to it. When we deploy Pods, they each get a separate IP address, and Pods can communicate with each other using these internal IP addresses. But relying on these IP addresses for anything may not be a good idea, as they are subject to change when Pods are recreated. Networking on a single node is fairly easy to understand, but how does it work when there are multiple nodes?

kubernetes deployment summary and commands

Rolling updates: suppose one of the updates resulted in an error and we would like to roll back the change. Or we may want to pause the environment, make several changes, and resume so that the changes are rolled out together.

How do we create a deployment file? Everything is the same as for a ReplicaSet in the yml file, except that the kind is now Deployment.

Commands:
kubectl create -f deployment-ymlFile
kubectl get deployments
kubectl get replicaset (a Deployment automatically creates a ReplicaSet)
kubectl get pods
To see all the created objects at once: kubectl get all

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-deployment
      labels:
        name: myapp
        type: front-end
    spec:
      template:
        metadata:
          name: myapp-pod
          labels:
            app: myapp
            type: front-end
        spec:
          containers:
            - name: nginx-controller
              image: nginx
      replicas: 3
      selector:
        matchLabels:
          type: front-end

in most of the…
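The rolling-update behaviour described above can be tuned on the Deployment itself. A sketch of the relevant strategy fields (the maxUnavailable/maxSurge values here are illustrative, not from the post):

```yaml
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod may be down during the update
      maxSurge: 1         # at most one extra pod may be created during the update
```

Rollbacks and pauses are then done with kubectl rollout undo deployment/myapp-deployment, kubectl rollout pause / resume, and kubectl rollout status shows the progress.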

kubernetes controllers explanation and commands

Replication controller: it helps ensure that the specified number of Pods is running at any time. It also helps us spin up new Pods (and nodes) as the load / number of users increases.

There are two similar terms, replication controller and replica set. They sound similar but are not the same: the replication controller is the older technology, which is being replaced by the ReplicaSet.

How do we create a replication controller? For the ReplicationController, in the spec we provide the template of the Pod it should manage — so copy the whole yml definition of the Pod (except apiVersion and kind) and paste it under template:

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: myapp-rc
      labels:
        name: myapp
        type: front-end
    spec:
      template:
        metadata:
          name: myapp-pod
          labels:
            app: myapp
            type: front-end
        spec:
          containers:
            - name: nginx-controller
              image: nginx
      replicas: 3

execute it using…

Yaml in kubernetes

A Kubernetes yaml file will always contain 4 top-level fields (required fields):

    apiVersion:
    kind:
    metadata:
    spec:

______________________________________________________________________

kind: Pod (can be Pod, Service, ReplicaSet, Deployment)

    metadata:
      name: myapp-pod
      labels:
        app: myapp

metadata values are in the form of a dictionary. metadata can only have name and labels; however, labels can have as many properties as you wish.

spec is a dictionary:

    spec:
      containers:                 # a list/array, because Pods can have multiple containers within them
        - name: nginx-container   # the - indicates that this is the first item in the list
          image: nginx

Create the Pod using the command kubectl create -f pod-definition.yml. Once you create the Pod, to see it use the command kubectl get pods. To get a detailed description of a Pod use: kubectl describe pod {name from get pods command}.

eg. apiVersion : v1 kind : Pod met…
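Putting the four top-level fields together, a complete pod-definition.yml using the post's own example names might look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
  labels:
    app: myapp
spec:
  containers:
    - name: nginx-container
      image: nginx
```

Create it with kubectl create -f pod-definition.yml and check it with kubectl get pods.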

setting up kubernetes

We can set up Kubernetes on our laptop using solutions like Minikube and kubeadm. Minikube is a tool used to set up a single instance of Kubernetes; kubeadm is a tool used to configure Kubernetes in a multi-node setup on our local machines. With minikube we can only set up a single-node cluster; kubeadm helps us set up a multi-node cluster with the master and workers on separate machines.

terms in kubernetes

Node — a node is a machine, physical or virtual, on which Kubernetes is installed. A node is a worker machine on which containers will be launched by Kubernetes. Nodes were also known as minions in the past. What if the machine on which a container is running goes down? For this, we need to have more than one node.

Cluster — a group of nodes set together. This way, even if one node fails, we have our application accessible from the others. It also helps in balancing load.

Who manages the cluster — failures, load management, etc.? Answer: the Master. The Master is another node with Kubernetes installed on it, configured as a master. It watches over the nodes in the cluster and is responsible for the actual orchestration of containers on the worker nodes.

When we install Kubernetes, we are actually installing the following components:
1. API server
2. etcd service
3. kubelet service
4. container runtime
5. controllers
6. schedulers

The API server acts as the front e…

example .travis .yml file

learned from udemy

    sudo: required
    # above says we need superuser-level permission to make this work

    services:
      - docker
    # above says we need the docker CLI installed, so it will install a copy
    # of docker into the running container

    before_install:
      - docker build -t kishlayraj2/docker-react -f Dockerfile.dev .
    # above says what needs to happen before we deploy our project or run our tests

    script:
      # tells how to run the test suite
      # if any of the commands below returns a code other than 0, travis is
      # going to assume that the process has failed
      - docker run kishlayraj2/docker-react npm run test -- --coverage
    # the only problem here is that "npm run test" waits for input from the user
    # and doesn't terminate by itself; to make it stop automatically we use -- --coverage

elastic beanstalk

Elastic Beanstalk is the easiest way to run single-container, production Docker instances. The advantage of Beanstalk is that it auto-scales as our load increases and by default comes with a load balancer.

synchronous speech recognition google code

    import io
    import os
    import time

    # Imports the Google Cloud client library
    from google.cloud import speech
    from google.cloud.speech import enums
    from google.cloud.speech import types

    start_time = time.time()

    # Instantiates a client
    client = speech.SpeechClient()

    # The name of the audio file to transcribe
    file_name = os.path.join(
        os.path.dirname(__file__),
        '/home/kishlay/Documents/DeepDive/pythonCloudSpeech',
        'obamaLong.flac')

    # Loads the audio into memory
    with io.open(file_name, 'rb') as audio_file:
        content = audio_file.read()
        audio = types.RecognitionAudio(content=content)

    config = types.RecognitionConfig(
        # encoding=enums.RecognitionConfig.AudioEncoding.FLAC,
        # sample_rate_hertz=16000,
        language_code='en-US',
        enable_word_time_offsets=True)

    # Detects speech in the audio file
    response = client.recognize(config, audio)

    for result in response.results:
        alternative = result.alternatives[0]

async google long speech recognition with polling

    import io
    import os
    import time

    # Imports the Google Cloud client library
    from google.cloud import speech
    from google.cloud.speech import enums
    from google.cloud.speech import types

    def transcribe_gcs(gcs_uri):
        """Asynchronously transcribes the audio file specified by the gcs_uri."""
        from google.cloud import speech
        from google.cloud.speech import enums
        from google.cloud.speech import types

        client = speech.SpeechClient()
        audio = types.RecognitionAudio(uri=gcs_uri)
        config = types.RecognitionConfig(
            # encoding=enums.RecognitionConfig.AudioEncoding.FLAC,
            # sample_rate_hertz=16000,
            language_code='en-US')

        operation = client.long_running_recognize(config, audio)
        print('Waiting for operation to complete...')
        retry_count = 1000000
        while retry_count and not operation.done():
            retry_count -= 1
            print…
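The polling loop at the end can be sketched generically, independent of the Google client. In this sketch, poll_until_done is my own helper name and FakeOperation is a hypothetical stand-in for the real long-running operation object, used only to show the pattern:

```python
import time

def poll_until_done(operation, max_retries=10, interval=0.01):
    """Poll operation.done() until it reports completion or retries run out."""
    retries = 0
    while retries < max_retries and not operation.done():
        retries += 1
        time.sleep(interval)  # back off between polls instead of busy-waiting
    return operation.done()

class FakeOperation:
    """Hypothetical stand-in: reports done after a few polls."""
    def __init__(self):
        self.calls = 0
    def done(self):
        self.calls += 1
        return self.calls > 3

print(poll_until_done(FakeOperation()))  # True once the fake operation finishes
```

The real code does the same thing with operation.done() on the object returned by long_running_recognize, just with a much larger retry budget.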

using ffmpeg to convert file

To convert any file using ffmpeg:

    ffmpeg -i input.flac output.wav

-i stands for input.

To get media information for any media file:

    mediainfo file.flac

To convert dual-channel audio to mono using ffmpeg:

    ffmpeg -i mic.flac -ac 1 out2.flac

-ac is the audio-channel flag; 1 means convert to 1 channel.

ffmpeg documentation: https://gist.github.com/protrolium/e0dbd4bb0f1a396fcb55
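When driving ffmpeg from a script, the mono-conversion command above can be built with a small helper and run via subprocess. A sketch — the helper name ffmpeg_mono_cmd is my own, and actually running the command of course requires ffmpeg on the PATH:

```python
import subprocess

def ffmpeg_mono_cmd(src, dst):
    """Build the argument list for converting `src` to single-channel `dst`."""
    return ["ffmpeg", "-i", src, "-ac", "1", dst]

cmd = ffmpeg_mono_cmd("mic.flac", "out2.flac")
print(" ".join(cmd))  # ffmpeg -i mic.flac -ac 1 out2.flac

# To actually run the conversion (requires ffmpeg installed):
# subprocess.run(cmd, check=True)
```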

docker in production

filename for development: Dockerfile.dev
filename for prod: Dockerfile

Question: how do we build a docker image with a custom file name, since by default it only searches for Dockerfile?
Answer: docker build -f Dockerfile.dev .

Another problem: any time we make a change to our code we need to build the image again. We need to find some way by which, when we change the code, the change automatically reaches the container without rebuilding and restarting it.

Solution: Docker volumes. The problem was that we were taking a snapshot of the project and placing it inside the container. Instead, we will now create a reference that points to the local machine, e.g.:

    docker run -p 3000:3000 -v /app/node_modules -v $(pwd):/app <imageid>

-v /app/node_modules  -> says don't map this path to the local machine
-v $(pwd):/app        -> says map the present working directory to /app in the container

writing the same code in docker-compose to simplify version:…
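The post is cut off before showing the compose file, but the docker run command above could be expressed in docker-compose roughly like this (a sketch, not the post's original continuation; the service name web is illustrative):

```yaml
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev   # the custom filename from above
    ports:
      - "3000:3000"
    volumes:
      - /app/node_modules          # don't map this path back to the host
      - .:/app                     # map the project directory into /app
```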

docker compose

Communication/networking between containers: let us say we have 2 containers running independently, e.g. a Node app and a Redis server. There are two ways to make them communicate:

1. Using the Docker CLI and setting up ports etc. — but we have to do this every time, and that is a pain, so it's not used much in the industry.
2. Using docker-compose.

The big purpose of docker-compose is that we can avoid writing all the Docker CLI commands every time we want to start containers. With it we can start multiple Docker containers at the same time and automatically connect them with some form of networking. It helps issue multiple commands very quickly.

We will see "services" a lot in the commands; any time we see this, it essentially means CONTAINER. e.g.:

    version: '3'
    services:
      redis-server:
        image: 'redis'
      node-app:
        build: .
        ports:
          - "4001:8081"

With services, we are specifying that we want these two containers; since w…

creating docker image

1. Create a file named Dockerfile with the following code:

    # use an existing docker image as base
    FROM alpine

    # download and install a dependency
    RUN apk add --update redis
    # apk here is the package manager of alpine, and with it we can install new programs

    # tell the image what to do when it starts as a container
    CMD ["redis-server"]

2. Execute the command docker build . — it will return an id at the end.

3. docker run <id>

Tagging an image when building a docker image:

1. docker build -t stephengrider/redis:latest .

stephengrider/redis:latest is the tag of the image:
stephengrider -> your docker id
redis -> repo or project name
latest -> version name

Now we can run the image by its name: docker run stephengrider/redis:latest
We don't need to specify the version at the end; if we don't specify it, docker takes the latest version by default.

docker build busybox -> images like busybox have simpler names because they are community, open-source images.

Relationship…

Docker commands summary person notes

1. To execute something inside a docker image, e.g.:
a. docker run busybox echo hi there
b. docker run busybox ls
But this will not work with the famous hello-world image. Reason: it doesn't contain those programs (ls, echo, ...); it was designed as the most minimal way to print hello world.

2. List all running containers (presently running): docker ps
So when we ran docker run hello-world, it immediately ran and shut down, which is why docker ps doesn't show it.

3. To see all the containers that ever ran on the machine: docker ps --all

Lifecycle

4. docker run = docker create + docker start
Creating a container is equivalent to preparing the filesystem snapshot; starting it runs the start-up command.
Executing them individually:
1. docker create hello-world
returns an id --> something like sfsf4tre32423sdfsf
To run it:
2. docker start -a <id>
id is the id we got when we created the container; -a helps to watch the output and print it in the terminal. By default docker start will not…