
Creating a Docker image

1. Create a file named Dockerfile with the following content:

# use an existing docker image as a base
FROM alpine

# download and install a dependency
RUN apk add --update redis

# tell the image what to do when it starts as a container
CMD ["redis-server"]

(apk here is Alpine's package manager; it is what we use to install new programs inside the image)

2. Execute docker build . -> this builds the image and prints an image id at the end

3. docker run <image id>
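
A minimal sketch of the two steps (the image id shown is a hypothetical placeholder; the build prints the real one):

docker build .
docker run fc1a2b3c4d5e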


Tagging an image when building a Docker image
1. docker build -t stephengrider/redis:latest .

 stephengrider/redis:latest is the tag of the image

stephengrider -> your Docker id
redis -> repo or project name
latest -> version name

Now we can run the image by its tag:
docker run stephengrider/redis:latest
We don't need to specify the version at the end; if we leave it off, Docker uses the latest version by default.
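
For example, running the same image without a version resolves to :latest:

docker run stephengrider/redis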

Images like busybox have simpler names (no Docker id prefix, e.g. docker run busybox) because they are official, open-source community images.

Relationship between image and container

So far we have generated containers out of images, but the opposite is true too .. we can take a container and generate an image from it.

So we can have a running container where we can do things: modify it, install stuff, etc.

Creating a running container:
e.g. docker run -it alpine sh

Modify it (inside the container's shell):
apk add --update redis

Open a 2nd terminal and run:
docker ps
to get the id of the running container

docker commit -c 'CMD ["redis-server"]' <id of running container>

the output is the id of the new image that we just customized
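
Putting the whole flow together, a rough sketch (the container id a1b2c3d4e5f6 is a hypothetical placeholder):

# terminal 1: start an interactive alpine container and install redis inside it
docker run -it alpine sh
apk add --update redis

# terminal 2: find the running container's id, then snapshot it as a new image
docker ps
docker commit -c 'CMD ["redis-server"]' a1b2c3d4e5f6
# prints the id of the new image, which can then be started with docker run <new image id>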



Problems with the base image
What if what we are trying to run is not present in the base image?
e.g. alpine doesn't contain npm
We can then do 2 things:
either get a base image which contains what we want
or
attempt to additionally install what we want inside the image

"alpine" is a common name in the dev world for an image variant that is as small as possible,
so we prefer the alpine version when downloading.
e.g. we had to download node, so we check whether there is an alpine variant among the different versions
and download that one:
FROM node:alpine

We need to copy our work from the hard disk into the Docker image. How do we do it?

COPY ./ ./
1st ./ -> path to copy from, on your machine
2nd ./ -> path to copy to, inside the container

We also need to ensure that our project files are available before we run npm install, so we put the COPY instruction above RUN npm install (see the sketch below).
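
A rough sketch of the resulting Dockerfile, assuming a Node.js app started with npm start (the CMD and file layout are assumptions, not from the original notes):

# small Node base image that includes npm
FROM node:alpine

# copy the project files into the image so npm install can see package.json
COPY ./ ./

# install dependencies
RUN npm install

# default command when the container starts (assumes a "start" script in package.json)
CMD ["npm", "start"]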

Build the image and try to run it by its name:

docker build -t kishlayraj2/simpleweb .
docker run kishlayraj2/simpleweb
   
Now there are a few things still missing.

We have the server running inside the container, but we notice that we cannot access it in our browser.
The reason: we have not mapped the port.

Container port mapping
The container has its own isolated set of ports.
The mapping is a runtime config, given when we start the container:
-p 8080:8080
1st -> localhost port
2nd -> port of the container
The port numbers do not need to be identical.
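
For example, to map local port 8080 to the container's port 8080 when starting the image built above:

docker run -p 8080:8080 kishlayraj2/simpleweb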

To run a shell inside of the container:
docker run -it kishlayraj2/simpleweb sh

One other mistake we have made so far is that we copied our files directly in, without creating a new folder. This might cause conflicts or overwrite existing files in the container.
To address this we have the option of setting WORKDIR, i.e. the working directory inside of the container. We write this instruction above the COPY instruction so that the copy happens inside the workdir. If the folder is not present, it will be created.

When we open a shell in the container and we had used WORKDIR in the Dockerfile, the shell again starts in the WORKDIR location by default.
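
A sketch with WORKDIR added (the directory /usr/app is just a conventional choice, not something the notes prescribe):

FROM node:alpine

# every following instruction, and any shell opened in the container,
# starts from this directory; Docker creates it if it does not exist
WORKDIR /usr/app

COPY ./ ./
RUN npm install
CMD ["npm", "start"]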

Copy folder optimisation
If we make any change to any of our files, the cache becomes invalid from the COPY step onwards and Docker redoes all the subsequent steps. The COPY is necessary because the next step, RUN npm install, needs package.json.
How do we optimise this?
First we copy only package.json and run npm install; only at the end, when it is necessary, do we copy all the other files. This way we keep the cached layers (including the installed dependencies) even when we change project files.
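
A sketch of the optimised Dockerfile, under the same assumptions as the earlier sketches:

FROM node:alpine
WORKDIR /usr/app

# copy only package.json first, so the npm install layer stays cached
# as long as the dependencies do not change
COPY ./package.json ./
RUN npm install

# copy the rest of the project last; changes here do not invalidate the install layer
COPY ./ ./

CMD ["npm", "start"]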
