
In this article, we will understand how we can extract all the links from a URL or an HTML document using Python.  

First Method:

 

Libraries Required:

  • bs4 (BeautifulSoup): A Python library that makes it easy to scrape information from web pages and extract data from HTML and XML files. It does not ship with the standard Python installation, so it needs to be installed separately. To install it, type the following command in your terminal.
pip install bs4
  • requests: A library that makes it easy to send HTTP requests and fetch web page content. It also needs to be installed separately. To install it, type the following command in your terminal.
pip install requests

Steps to be followed:

  • Import the required libraries (bs4 and requests).
  • Create a function that fetches the HTML document from the URL using the requests.get() method, passing the URL to it.
  • Create a parse tree object, i.e. a soup object, with the BeautifulSoup() constructor, passing it the HTML document extracted above and Python's built-in HTML parser.
  • Use the a tag to extract the links from the BeautifulSoup object.
  • Get the actual URLs from all the anchor tag objects with the get() method, passing 'href' as the argument.
  • Similarly, you can get the title of each link with the get() method, passing 'title' as the argument.

Implementation:

 

# import necessary libraries 
from bs4 import BeautifulSoup 
import requests 
import re 
  
  
# function to extract html document from given url 
def getHTMLdocument(url): 
      
    # request the HTML document of the given url 
    response = requests.get(url) 
      
    # return the text (HTML) content of the response 
    return response.text 
  
    
# assign the URL to scrape 
url_to_scrape = "https://www.codespeedy.com/"
  
# create document 
html_document = getHTMLdocument(url_to_scrape) 
  
# create soup object 
soup = BeautifulSoup(html_document, 'html.parser') 
  
  
# find all the anchor tags with "href"  
# attribute starting with "https://" 
for link in soup.find_all('a',  
                          attrs={'href': re.compile("^https://")}): 
    # display the actual urls 
    print(link.get('href'))
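As mentioned in the steps above, the same get() method can also pull each link's title attribute. A minimal sketch of this, parsing a small inline HTML snippet (hypothetical data, used instead of a live URL so the example runs offline):

```python
from bs4 import BeautifulSoup

# hypothetical HTML snippet standing in for a fetched document
html_document = """
<a href="https://example.com/a" title="Page A">A</a>
<a href="https://example.com/b" title="Page B">B</a>
<a href="/relative">No title</a>
"""

soup = BeautifulSoup(html_document, 'html.parser')

for link in soup.find_all('a'):
    # get() returns None when the attribute is missing,
    # whereas link['title'] would raise a KeyError
    print(link.get('href'), link.get('title'))
```

Using get() rather than dictionary-style access is the safer choice here, since not every anchor tag carries a title attribute.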

Second Method:

How to find all links using BeautifulSoup and Python?

 

You can find all of the links (anchor <a> elements) on a web page by using the find_all function of BeautifulSoup4, passing the tag "a" as a parameter to the function.

 

import requests
from bs4 import BeautifulSoup

response = requests.get("https://www.scrapingbee.com/blog/")
soup = BeautifulSoup(response.content, 'html.parser')

links = soup.find_all("a")  # find all elements with the tag <a>
for link in links:
    print("Link:", link.get("href"), "Text:", link.string)
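Note that find_all("a") returns href values exactly as they appear in the page, which may be relative ("/pricing") or fragment-only ("#top"). If you need absolute URLs, the standard library's urllib.parse.urljoin can resolve them against the page's base URL. A short sketch, using hypothetical href values:

```python
from urllib.parse import urljoin

base_url = "https://www.scrapingbee.com/blog/"

# hypothetical hrefs as find_all("a") might return them:
# absolute, root-relative, page-relative, and fragment-only
hrefs = ["https://example.com/page", "/pricing", "post-1/", "#top"]

for href in hrefs:
    # urljoin leaves absolute URLs untouched and
    # resolves the rest against the base URL
    print(urljoin(base_url, href))
```

This is a common post-processing step when the extracted links will be fetched in a later request.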
