Web Scraping Using Python


Course Overview

Python is one of the world's top programming languages, and Python training has become one of the most popular trainings among individuals. Training Basket's Python Training & Certification course covers basic and advanced Python concepts and how to apply them in real-world applications. Python is a flexible and powerful open-source language that is easy to learn and includes powerful libraries for data analysis and manipulation. Our Python training course content is curated by experts as per the standard industry curriculum. The curriculum, coding challenges, and real-life problems cover data operations in Python, strings, conditional statements, error handling, shell scripting, web scraping, and the commonly used Python web framework Django. Take this Python training and certification course and become job-ready now.


What is Web Scraping?

Web scraping is a technique to extract large amounts of data from websites. The term "scraping" refers to obtaining information from another source (webpages) and saving it in a local file. For example, suppose you are working on a project called "Phone comparing website," where you need the prices, ratings, and model names of mobile phones to compare them. If you collect these details by checking the various sites manually, it will take a lot of time. In that case, web scraping plays an important role: by writing just a few lines of code you can get the desired results.


Web scraping extracts data from websites in an unstructured format and helps collect that unstructured data and convert it into a structured form.

Startups prefer web scraping because it is a cheap and effective way to get a large amount of data without partnering with a data-selling company.
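For instance, here is a minimal sketch of that conversion; the HTML snippet and field names below are made up purely for illustration:

    # Unstructured HTML in, structured records out (illustrative snippet)
    from bs4 import BeautifulSoup

    html = '''
    <div class="product"><span class="name">Phone A</span><span class="price">19,999</span></div>
    <div class="product"><span class="name">Phone B</span><span class="price">24,999</span></div>
    '''
    soup = BeautifulSoup(html, "html.parser")
    records = [{"name": d.find(class_="name").text,
                "price": d.find(class_="price").text}
               for d in soup.find_all(class_="product")]
    print(records)  # [{'name': 'Phone A', 'price': '19,999'}, ...]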

Is Web Scraping legal?

Here the question arises whether web scraping is legal or not. The answer is that some sites allow it when used lawfully. Web scraping is just a tool; you can use it in the right way or the wrong way.

Web scraping is illegal if someone tries to scrape nonpublic data. Nonpublic data is not reachable by everyone; if you try to extract such data, it is a violation of the site's legal terms.
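One practical way to stay on the right side of these rules is to check a site's robots.txt file before scraping it. Here is a small sketch using only Python's standard library (the Wikipedia URL is just an example):

    # Ask the site's robots.txt whether a given URL may be fetched
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://en.wikipedia.org/robots.txt")
    rp.read()
    print(rp.can_fetch("*", "https://en.wikipedia.org/wiki/Machine_learning"))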

There are several tools available to scrape data from websites, such as:

  • Scraping-Bot
  • Scraper API
  • Octoparse
  • Import.io
  • Webhose.io
  • Dexi.io
  • OutWit Hub
  • Diffbot
  • Content Grabber
  • Mozenda
  • Web Scraper Chrome Extension

Why Web Scraping?


As we have discussed above, web scraping is used to extract data from websites. But we should also know how to use that raw data, which can be applied in various fields. Let's have a look at the uses of web scraping:

  • Dynamic Price Monitoring

It is widely used to collect data from several online shopping sites, compare product prices, and make profitable pricing decisions. Price monitoring with scraped data gives companies insight into market conditions and facilitates dynamic pricing, helping them stay ahead of their competitors.

  • Market Research

Web scraping is perfectly suited for market trend analysis: it helps gain insight into a particular market. Large organizations require a great deal of data, and web scraping provides that data with a high level of reliability and accuracy.

  • Email Gathering

Many companies use personal e-mail data gathered from the web for email marketing, which lets them target a specific audience with their campaigns.

  • News and Content Monitoring

A single news cycle can create an outstanding opportunity or a genuine threat to your business. If your company depends on news analysis, or frequently appears in the news, web scraping provides a solution for monitoring and parsing the most critical stories. News articles and social media platforms can directly influence the stock market.

  • Social Media Scraping

Web scraping plays an essential role in extracting data from social media websites such as Twitter, Facebook, and Instagram to find trending topics.

  • Research and Development

Large sets of data such as general information, statistics, and temperature readings are scraped from websites and analyzed to carry out surveys or research and development.

Why use Python for Web Scraping?

There are other popular programming languages, so why choose Python over them for web scraping? Below is a list of Python's features that make it one of the most useful programming languages for web scraping.

  • Dynamically Typed

In Python, we don't need to define data types for variables; we can use a variable directly wherever it is required. This saves time and makes tasks faster. Python uses classes to identify the data type of a variable at runtime.
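For example:

    # No type declarations: the same name can be rebound to different types,
    # and Python identifies the type through the object's class
    x = 10
    print(type(x))   # <class 'int'>
    x = "ten"
    print(type(x))   # <class 'str'>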

  • Vast collection of libraries

Python comes with an extensive range of libraries such as NumPy, Matplotlib, Pandas, SciPy, etc., that provide flexibility for a variety of purposes. It is suited to almost every emerging field, including web scraping, for extracting and manipulating data.

  • Less Code

The purpose of web scraping is to save time. But what if you spend more time writing the code? That's why we use Python: it can perform a task in a few lines of code.
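As a rough illustration, fetching a page and printing its title takes only a handful of lines (example.com is used here as a stand-in for any public page):

    import bs4
    import requests

    html = requests.get("https://example.com").text
    print(bs4.BeautifulSoup(html, "html.parser").title.text)  # Example Domain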

  • Open-Source Community

Python is open-source, which means it is freely available for everyone. It has one of the biggest communities across the world where you can seek help if you get stuck anywhere in Python code.

The basics of web scraping

Web scraping consists of two parts: a web crawler and a web scraper. In simple words, the web crawler is the horse, and the scraper is the chariot: the crawler leads the scraper, which extracts the requested data. Let's understand these two components of web scraping:

  • The crawler

A web crawler is generally called a "spider." It is a technology that browses the internet to index and search for content by following the given links. It searches for the relevant information asked for by the programmer.

  • The scraper

A web scraper is a dedicated tool that is designed to extract data from several websites quickly and effectively. Web scrapers vary widely in design and complexity, depending on the project. A minimal sketch of the two components working together follows below.
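Here is that sketch; the start URL, the page limit, and the title-extracting scraper are illustrative choices, not part of any particular tool:

    import requests
    from bs4 import BeautifulSoup
    from urllib.parse import urljoin

    def crawl(start_url, max_pages=3):
        # Crawler: visits pages breadth-first and yields their parsed HTML
        seen, queue = set(), [start_url]
        while queue and len(seen) < max_pages:
            url = queue.pop(0)
            if url in seen:
                continue
            seen.add(url)
            page = BeautifulSoup(requests.get(url).text, "html.parser")
            yield url, page
            # enqueue the links found on the page for the crawler to follow
            for a in page.find_all("a", href=True):
                queue.append(urljoin(url, a["href"]))

    def scrape(page):
        # Scraper: extracts the requested data from one page
        return page.title.text if page.title else ""

    for url, page in crawl("https://example.com"):
        print(url, "->", scrape(page))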

How does Web Scraping work?

These are the steps to perform web scraping. Let's understand how web scraping works.

Step – 1: Find the URL that you want to scrape

First, you should understand your project's data requirements. A webpage or website contains a large amount of information, so scrape only the relevant information. In simple words, the developer should be familiar with the data requirements.

Step – 2: Inspecting the page

The data is extracted in raw HTML format, which must be carefully parsed to reduce the noise in the raw data. In some cases, the data can be as simple as a name and address or as complex as high-dimensional weather and stock market data.

Step – 3: Write the code

Write the code to extract the information, select the relevant data, and run it.

Step – 4: Store the data in a file

Store the information in the required file format, such as CSV, XML, or JSON.
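For example, here is a short sketch of this step with Python's built-in csv module (the rows are made-up sample data):

    import csv

    rows = [("iPhone 12", "Rs 59999", "4.6"),
            ("iPhone 11", "Rs 43999", "4.5")]

    with open("products.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Product_Name", "Pricing", "Ratings"])
        writer.writerows(rows)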

Getting Started with Web Scraping

Python has a vast collection of libraries and provides several that are very useful for web scraping. Let's look at the libraries required for web scraping.

Libraries used for web scraping

  • Selenium – Selenium is an open-source automated testing library. It is used to automate and check browser activity. To install this library, type the following command in your terminal:

    pip install selenium

Note – It is good to use the PyCharm IDE.
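A minimal usage sketch, assuming Google Chrome and a matching ChromeDriver are available on your machine:

    # Open a page in Chrome and hand its rendered HTML to BeautifulSoup
    from selenium import webdriver
    from bs4 import BeautifulSoup

    driver = webdriver.Chrome()
    driver.get("https://example.com")
    page = BeautifulSoup(driver.page_source, "html.parser")
    print(page.title.text)
    driver.quit()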


  • Pandas

The Pandas library is used for data manipulation and analysis. It is used to extract the data and store it in the desired format.
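A short sketch of that workflow; the column names and values are illustrative:

    import pandas as pd

    # Collect scraped values in a dict of lists, then build a DataFrame
    data = {"Product": ["iPhone 12", "iPhone 11"],
            "Price": [59999, 43999]}
    df = pd.DataFrame(data)
    df.to_csv("products.csv", index=False)  # store in the desired format
    print(df.head())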

  • BeautifulSoup

BeautifulSoup is a Python library that is used to pull data out of HTML and XML files. It is mainly designed for web scraping. It works with a parser to provide a natural way of navigating, searching, and modifying the parse tree. At the time of writing, the latest version of BeautifulSoup is 4.8.1.

Let's understand the BeautifulSoup library in detail.

Installation of BeautifulSoup

You can install BeautifulSoup by typing the following command:

    pip install bs4

Installing a parser

BeautifulSoup supports the HTML parser included in Python's standard library, and it also supports several third-party Python parsers. You can install any of them according to your needs. The list of BeautifulSoup's parsers is the following:

    Parser                   Typical usage
    Python's html.parser     BeautifulSoup(markup, "html.parser")
    lxml's HTML parser       BeautifulSoup(markup, "lxml")
    lxml's XML parser        BeautifulSoup(markup, "lxml-xml")
    html5lib                 BeautifulSoup(markup, "html5lib")

We recommend installing the html5lib parser because it parses pages the same way a modern web browser does; alternatively, you can install the lxml parser, which is very fast.

Type the following command in your terminal:

    pip install html5lib
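To see why the parser choice matters, compare how two parsers handle the same broken markup; this behaviour is as described in the BeautifulSoup documentation:

    from bs4 import BeautifulSoup

    # html.parser simply drops the stray </p>
    print(BeautifulSoup("<a></p>", "html.parser"))
    # <a></a>

    # html5lib repairs the markup the way a web browser would
    print(BeautifulSoup("<a></p>", "html5lib"))
    # <html><head></head><body><a><p></p></a></body></html>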


BeautifulSoup transforms a complex HTML document into a complex tree of Python objects, but only a few essential types of object are used in most cases:

  • Tag

A Tag object corresponds to an XML or HTML tag in the original document.

    soup = bs4.BeautifulSoup('<b class="boldest">Extremely bold</b>', 'html.parser')
    tag = soup.b
    type(tag)

Output:

    <class 'bs4.element.Tag'>

A tag contains a lot of attributes and methods, but the most important features of a tag are its name and attributes.

  • Name

Every tag has a name, accessible as .name:

    tag.name
    # 'b'

  • Attributes

A tag may have any number of attributes. The tag <b id="boldest"> has an attribute "id" whose value is "boldest". We can access a tag's attributes by treating the tag as a dictionary:

    tag['id']
    # 'boldest'

We can add, remove, and modify a tag's attributes by using the tag as a dictionary:

    # add attributes
    tag['id'] = 'verybold'
    tag['another-attribute'] = 1
    tag
    # delete an attribute
    del tag['id']
  • Multi-valued Attributes

In HTML5, some attributes can have multiple values. The class attribute (which can hold more than one CSS class) is the most common multi-valued attribute. Other such attributes are rel, rev, accept-charset, headers, and accesskey.

    class_is_multi = {'*': 'class'}
    xml_soup = BeautifulSoup('<p class="body strikeout"></p>', 'xml', multi_valued_attributes=class_is_multi)
    xml_soup.p['class']
    # ['body', 'strikeout']
  • NavigableString

A string in BeautifulSoup refers to text within a tag. BeautifulSoup uses the NavigableString class to contain these bits of text.

    tag.string
    # 'Extremely bold'
    type(tag.string)
    # <class 'bs4.element.NavigableString'>

A string is immutable, meaning it can't be edited in place, but it can be replaced with another string using replace_with():

    tag.string.replace_with("No longer bold")
    tag
    # <b class="boldest">No longer bold</b>

In some cases, if you want to use a NavigableString outside of BeautifulSoup, call str() on it (unicode() in Python 2) to turn it into a normal Python string.
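For example:

    from bs4 import BeautifulSoup

    soup = BeautifulSoup("<b>No longer bold</b>", "html.parser")
    plain = str(soup.b.string)  # a plain str with no tie to the parse tree
    print(type(plain), plain)   # <class 'str'> No longer bold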

  • BeautifulSoup object

The BeautifulSoup object represents the complete parsed document as a whole. In many cases, we can use it as a Tag object, which means it supports most of the methods described for navigating and searching the tree.

    doc = BeautifulSoup("<document><content/>INSERT FOOTER HERE</document>", "xml")
    footer = BeautifulSoup("<footer>Here's the footer</footer>", "xml")
    doc.find(text="INSERT FOOTER HERE").replace_with(footer)
    print(doc)

Output:

    <?xml version="1.0" encoding="utf-8"?>
    <document><content/><footer>Here's the footer</footer></document>

Web Scraping Example

Let's take an example to understand scraping practically by extracting data from a webpage and inspecting the whole page.

First, open your favorite page on Wikipedia and inspect the whole page; before extracting data from the webpage, you should be clear about your data requirement. Consider the following code:

    # importing the BeautifulSoup library
    import bs4
    import requests

    # Creating the request
    res = requests.get("https://en.wikipedia.org/wiki/Machine_learning")
    print("The object type:", type(res))

    # Convert the response object to a BeautifulSoup object
    soup = bs4.BeautifulSoup(res.text, 'html5lib')
    print("The object type:", type(soup))

Output:

    The object type: <class 'requests.models.Response'>
    The object type: <class 'bs4.BeautifulSoup'>

In the following lines of code, we extract all the headings of the webpage by class name. Here, front-end knowledge plays an essential role in inspecting the webpage.

    for i in soup.select('.mw-headline'):
        print(i.text, end=',')

Output:

    Overview,Machine learning tasks,History and relationships to other fields,Relation to data mining,Relation to optimization,Relation to statistics, Theory,Approaches,Types of learning algorithms,Supervised learning,Unsupervised learning,Reinforcement learning,Self-learning,Feature learning,Sparse dictionary learning,Anomaly detection,Association rules,Models,Artificial neural networks,Decision trees,Support vector machines,Regression analysis,Bayesian networks,Genetic algorithms,Training models,Federated learning,Applications,Limitations,Bias,Model assessments,Ethics,Software,Free and open-source software,Proprietary software with free and open-source editions,Proprietary software,Journals,Conferences,See also,References,Further reading,External links,

In the above code, we imported the bs4 and requests libraries. We then created a res object to send a request to the webpage. As you can observe, we have extracted all the headings from the webpage.


Let's understand another example: we will make a GET request to a URL and create a parse tree object (soup) with the use of BeautifulSoup and the "html5lib" parser.

Here we will scrape the webpage at the given link (https://zeblearn.com/tutorials/). Consider the following code:

    # importing the libraries
    from bs4 import BeautifulSoup
    import requests

    url = "https://zeblearn.com/tutorials/"

    # Make a GET request to fetch the raw HTML content
    html_content = requests.get(url).text

    # Parse the HTML content
    soup = BeautifulSoup(html_content, "html5lib")
    print(soup.prettify())  # print the parsed HTML

The above code will display all the HTML code of the page at that URL.

Using the BeautifulSoup object, i.e. soup, we can collect the required data. Let's print some interesting information using the soup object:

Let's print the title of the web page:

    print(soup.title)

Output: It will give an output as follows:

    <title>Tutorials List – Javatpoint</title>

In the above output, the HTML tag is included with the title. If you want the text without the tag, you can use the following code:

    print(soup.title.text)

Output: It will give an output as follows:

    Tutorials List – Javatpoint
We can get all the links on the page along with their attributes, such as href, title, and inner text. Consider the following code:

    for link in soup.find_all("a"):
        print("Inner Text is: {}".format(link.text))
        print("Title is: {}".format(link.get("title")))
        print("href is: {}".format(link.get("href")))

Output: It will print all links along with their attributes. Here we display a few of them:

    href is: https://www.facebook.com/javatpoint
    Inner Text is:
    Title is: None
    href is: https://twitter.com/pagejavatpoint
    Inner Text is:
    Title is: None
    href is: https://www.youtube.com/channel/UCUnYvQVCrJoFWZhKK3O2xLg
    Inner Text is:
    Title is: None
    href is: https://javatpoint.blogspot.com
    Inner Text is: Learn Java
    Title is: None
    href is: https://zeblearn.com/tutorials/java-tutorial
    Inner Text is: Learn Data Structures
    Title is: None
    href is: https://zeblearn.com/tutorials/data-structure-tutorial
    Inner Text is: Learn C Programming
    Title is: None
    href is: https://zeblearn.com/tutorials/c-programming-language-tutorial
    Inner Text is: Learn C++ Tutorial

Demo: Scraping Data from the Flipkart Website

In this example, we will scrape mobile phone prices, ratings, and model names from Flipkart, one of the popular e-commerce websites. The following are the prerequisites to accomplish this task:

Prerequisites:

  • Python 2.x or Python 3.x with the Selenium, BeautifulSoup, and Pandas libraries installed.
  • Google Chrome browser
  • A parser such as html.parser, lxml, etc.

Step – 1: Find the desired URL to scrape

The initial step is to find the URL that you want to scrape. Here we are extracting mobile phone details from Flipkart. The URL of this page is https://www.flipkart.com/search?q=iphones&otracker=search&otracker1=search&marketplace=FLIPKART&as-show=on&as=off.

Step – 2: Inspecting the page

It is necessary to inspect the page carefully because the data is usually contained within tags, so we need to inspect the page to select the desired tags. To inspect the page, right-click on the element and click "Inspect".

Step – 3: Find the data for extracting

Extract the price, name, and rating, which are each contained within a "div" tag.

Step – 4: Write the Code

    from bs4 import BeautifulSoup as soup
    from urllib.request import urlopen as uReq

    # Request the webpage
    myurl = "https://www.flipkart.com/search?q=iphones&otracker=search&otracker1=search&marketplace=FLIPKART&as-show=on&as=off"

    uClient = uReq(myurl)
    page_html = uClient.read()
    uClient.close()

    page_soup = soup(page_html, features="html.parser")

    # This variable holds all the product containers of the webpage
    containers = page_soup.find_all("div", {"class": "_3O0U0u"})

    # Lines kept commented for testing; uncomment to inspect one container
    # container = containers[0]
    # print(soup.prettify(container))
    # price = container.find_all("div", {"class": "col col-5-12 _2o7WAb"})
    # print(price[0].text)
    # ratings = container.find_all("div", {"class": "niH0FQ"})
    # print(ratings[0].text)
    # print(len(containers))
    # print(container.div.img["alt"])

    # Creating the CSV file that will store all the data
    filename = "product1.csv"
    f = open(filename, "w")

    headers = "Product_Name,Pricing,Ratings\n"
    f.write(headers)

    for container in containers:
        product_name = container.div.img["alt"]

        price_container = container.find_all("div", {"class": "col col-5-12 _2o7WAb"})
        price = price_container[0].text.strip()

        rating_container = container.find_all("div", {"class": "niH0FQ"})
        ratings = rating_container[0].text

        # drop thousands separators, take the text after the rupee sign,
        # prefix "Rs", and trim the trailing EMI text that starts with "E"
        edit_price = ''.join(price.split(','))
        sym_rupee = edit_price.split("₹")
        add_rs_price = "Rs" + sym_rupee[1]
        split_price = add_rs_price.split("E")
        final_price = split_price[0]

        # keep only the leading numeric part of the rating text
        split_rating = str(ratings).split(" ")
        final_rating = split_rating[0]

        print(product_name.replace(",", "|") + "," + final_price + "," + final_rating + "\n")
        f.write(product_name.replace(",", "|") + "," + final_price + "," + final_rating + "\n")

    f.close()

Output:

[Screenshot: the generated product1.csv with the scraped product names, prices, and ratings]

We scraped the details of the iPhones and saved them in a CSV file, as you can see in the output. In the above code, we commented out a few lines of code for testing purposes. You can remove those comments and observe the output.

In this tutorial, we discussed all the basic concepts of web scraping and walked through a sample scrape from the leading e-commerce site Flipkart.
