By Sai Lokesh
The project takes a website URL and the number of images to download as input, and automatically downloads the images into the given directory.
The project is a web scraping tool written in Python. If the number of images is given as 0, it downloads all the images present on the website. The dependencies are requests, BeautifulSoup, and os.
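Of these, requests and BeautifulSoup are third-party packages, while os ships with the Python standard library. They can be installed with pip, for example:

pip install requests beautifulsoup4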
1. With the help of requests we can fetch the data from the website using the get() function.
2. Using BeautifulSoup you can parse and traverse the whole HTML code of the website. This is done by creating an object of the BeautifulSoup class with the response text and 'html.parser' as parameters; the response text is the page source as a string, and 'html.parser' is the parser used to walk the HTML.
3. Now we can access the tags of the HTML through the BeautifulSoup object using object.find_all('tag').
4. Here we access the image tags, read the 'src' attribute of each one (i.e. the link to that image), and append it to a list.
5. Create the directory using os.mkdir('directory_name').
6. Now iterate through the list and fetch the content from each link. That can be done with requests.get(link).content.
7. Download that content, i.e. the image, into the directory created in step 5.
8. Step 7 is achieved by opening a file in binary write mode; we can give it an extension such as .jpg or .png.
9. If all goes well, it displays 'Downloaded successfully'. (A sketch of the whole flow follows this list.)
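The sketch below strings the nine steps together. It is a minimal illustration, not the exact code in the packet: the function name download_images, the variable names, and the generic image{i}.jpg file names are assumptions, and urljoin is used here so that relative 'src' links resolve against the page URL.

import os
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

def download_images(url, count, directory):
    # Step 1: fetch the page HTML with requests.get()
    response = requests.get(url)

    # Step 2: parse the HTML with BeautifulSoup and 'html.parser'
    soup = BeautifulSoup(response.text, 'html.parser')

    # Steps 3-4: collect the 'src' link of every <img> tag
    # (urljoin turns relative links into absolute ones -- an assumption, not in the write-up)
    links = [urljoin(url, img['src']) for img in soup.find_all('img') if img.get('src')]

    # count == 0 means "download every image on the page"
    if count != 0:
        links = links[:count]

    # Step 5: create the target directory
    os.mkdir(directory)

    # Steps 6-8: fetch each image and write it in binary ('wb') mode
    for i, link in enumerate(links):
        data = requests.get(link).content
        with open(os.path.join(directory, f'image{i}.jpg'), 'wb') as f:
            f.write(data)

    # Step 9: report success
    print('Downloaded successfully')

download_images('https://example.com', 0, 'images')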
Submitted by Sai Lokesh (sailokeshvadlamudi)
Download packets of source code on Coders Packet