1. Create a file named Dockerfile with the following content:
# use an existing docker image as a base
FROM alpine
# download and install a dependency
RUN apk add --update redis
(apk is Alpine's package manager; it is how we install new programs inside the image)
# tell the image what to do when it starts as a container
CMD ["redis-server"]
2. Execute the command -> docker build .
this will print an image id at the end
3. docker run <image id>
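A minimal sketch of the whole flow (the image id shown here is illustrative; Docker prints the real one at the end of the build):
docker build .
docker run 6b65d4a8f1c2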
Tagging an image when building a docker image
1. docker build -t stephengrider/redis:latest .
stephengrider/redis:latest is the tag of the image
stephengrider -> your Docker ID
redis -> repo or project name
latest -> version
now we can run the image by its name
docker run stephengrider/redis:latest
we don't need to specify the version at the end; if we omit it, Docker uses the latest version by default
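For example, assuming the tag above was used when building, both of these run the same image:
docker run stephengrider/redis:latest
docker run stephengrider/redis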
official community images such as busybox have simpler names (no Docker ID prefix) because they are open-source, community-maintained images, e.g. docker run busybox
Relationship between image and container
so far we have generated containers out of images, but the opposite is possible too: we can take a container and generate an image from it
So we can have a running container where we can do things: modify it, install software, etc.
creating a running container
e.g. docker run -it alpine sh
modify it
apk add --update redis
open 2nd terminal
docker ps
to get the id of the running container
docker commit -c 'CMD ["redis-server"]' <id of running container>
the output is the id of the new image that we just customized
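Putting the manual-generation flow together as one sketch (the container id a1b2c3d4 is illustrative; use the one docker ps shows you):
terminal 1:
docker run -it alpine sh
apk add --update redis
terminal 2:
docker ps
docker commit -c 'CMD ["redis-server"]' a1b2c3d4
docker run <image id printed by the commit>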
Problems with the base image
if what we are trying to run is not present in the base image
e.g. alpine doesn't contain npm
we can now do 2 things
either get the base image which contains what we want
or
attempt to install what we want additionally inside the image
alpine is a common name in the dev world for an image variant that is as small as possible
so we prefer the alpine version when downloading
e.g. we had to download node, so we search whether an alpine tag exists among the different versions
and download that version with
FROM node:alpine
We need to copy our work from the hard disk into the docker image. How do we do it?
COPY ./ ./
1st ./ -> path to copy from, on your machine
2nd ./ -> path to copy to, inside the container
we also need to ensure that our project files are available before we run npm install, so we put the COPY above RUN npm install
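A sketch of the Dockerfile at this point (assuming a Node project with a package.json and an npm start script; it gets refined below):
# use a base image that already contains node and npm
FROM node:alpine
# copy the project files before installing dependencies
COPY ./ ./
# install dependencies
RUN npm install
# default command when the container starts
CMD ["npm", "start"]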
build the image and try to run it by its name:
docker build -t kishlayraj2/simpleweb .
docker run kishlayraj2/simpleweb
There are still a few things missing
We have the server running inside the container but we notice that we cannot access it in our browser
The reason: we have not mapped the port
container port mapping
container has its own isolated set of ports
this is a runtime configuration, applied when we start the container
we add -p 8080:8080 to the docker run command
1st is localhost port
2nd is port of the container
port numbers do not need to be identical
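For example, assuming the server inside the container listens on port 8080:
docker run -p 8080:8080 kishlayraj2/simpleweb
docker run -p 5000:8080 kishlayraj2/simpleweb   (reach the same container port via localhost:5000)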
to run a shell inside the container
docker run -it kishlayraj2/simpleweb sh
One other mistake we have made so far: we copied our files directly into the container's root without creating a dedicated folder. This might conflict with or overwrite existing files.
To address this we can set WORKDIR, i.e. the working directory inside the container. We write this instruction above the COPY command so that the copy happens inside the workdir. If the folder is not present, Docker creates it (see the sketch below).
When we open a shell in the container and the Dockerfile used WORKDIR, the shell also starts in the WORKDIR location by default
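A sketch with WORKDIR added (the folder /usr/app is just an example path):
FROM node:alpine
# every following instruction, and the default shell location, now uses this folder
WORKDIR /usr/app
COPY ./ ./
RUN npm install
CMD ["npm", "start"]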
Copy folder optimisation
If we make any change to any of our files, the cache becomes invalid and Docker redoes all the subsequent steps. The COPY has to come before the install step because RUN npm install needs package.json.
How to optimise this
First copy only package.json, run npm install, and only at the end, when it is actually needed, copy all the remaining files. This way the cached layers survive even when we change project files (see the sketch below).
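The optimised Dockerfile as a sketch (same assumptions as above):
FROM node:alpine
WORKDIR /usr/app
# copy only package.json first so the npm install layer stays cached
COPY ./package.json ./
RUN npm install
# copy the rest of the project; changes here no longer invalidate the install step
COPY ./ ./
CMD ["npm", "start"]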