
How To Scrape MercadoLibre With Python And Beautiful Soup?

In this blog, you will learn how to scrape MercadoLibre product data using Python and BeautifulSoup.

The blog aims to stay up-to-date, so you will get every result in real time.

First, make sure you have Python 3 installed. If not, get Python 3 and install it before you proceed. Then install Beautiful Soup with pip3 install beautifulsoup4.

We will also need the requests, soupsieve, and lxml libraries to fetch the data, parse it as HTML/XML, and apply CSS selectors. Install them with:

pip3 install requests soupsieve lxml

Once installed, open an editor and type in:

# -*- coding: utf-8 -*-

from bs4 import BeautifulSoup

import requests

Now let's go to the MercadoLibre search page and inspect the data we can get.

This is how it looks.


Back to our code. Let's try to get this data by pretending we are a browser, like this:

# -*- coding: utf-8 -*-

from bs4 import BeautifulSoup

import requests

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.9 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9'}

url='https://listado.mercadolibre.com.mx/phone#D[A:phone]'

response=requests.get(url,headers=headers)

print(response.text)

Save this as scrapeMercado.py.

If you run it:

python3 scrapeMercado.py

You will see the whole HTML page.
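
As a quick sanity check before parsing, it can help to confirm that the request actually succeeded. This small addition is not part of the original script; it assumes the response object from scrapeMercado.py above.

if response.status_code == 200:
    # Request went through; print the full HTML of the search results page
    print(response.text)
else:
    print('Request failed with status', response.status_code)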

Now, let's use CSS selectors to get to the data we want. To do that, let's go back to Chrome and open the inspect tool. We need to get to all the articles, and we notice that the class '.results-item' holds all the individual product details together.


Notice that the article title is contained in an h2 element inside the results-item class, so we can get to it like this:

# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.11 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9',
           'Accept-Encoding': 'identity'}

url = 'https://listado.mercadolibre.com.mx/phone#D[A:phone]'
response = requests.get(url, headers=headers)
# print(response.content)
soup = BeautifulSoup(response.content, 'lxml')

for item in soup.select('.results-item'):
    try:
        print('---------------------------')
        # The product title lives in an h2 inside each results-item block
        print(item.select('h2')[0].get_text())
    except Exception as e:
        # raise e
        print('')

This selects all the results-item blocks and runs through them, looking for the h2 element and printing its text.

So when you run it, you get the product titles.

Now, following the same process, we get the class names for the rest of the data, such as the product image, the link, and the price.

# -*- coding: utf-8 -*-
from bs4 import BeautifulSoup
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_2) AppleWebKit/601.3.11 (KHTML, like Gecko) Version/9.0.2 Safari/601.3.9',
           'Accept-Encoding': 'identity'}

url = 'https://listado.mercadolibre.com.mx/phone#D[A:phone]'
response = requests.get(url, headers=headers)
# print(response.content)
soup = BeautifulSoup(response.content, 'lxml')

for item in soup.select('.results-item'):
    try:
        print('---------------------------')
        print(item.select('h2')[0].get_text())                              # product title
        print(item.select('h2 a')[0]['href'])                               # product link
        print(item.select('.price__container .item__price')[0].get_text())  # price
        print(item.select('.image-content a img')[0]['data-src'])           # image URL
    except Exception as e:
        # raise e
        print('')

When we run it, it should print everything we need from each product.
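
If you would rather keep the results than just print them, the same selectors can be collected into a structured list and saved to a CSV file. This is a minimal sketch of one way to do it; it assumes the soup object from the script above, and the field names and output filename are our own choices, not part of the original script.

import csv

products = []
for item in soup.select('.results-item'):
    try:
        products.append({
            'title': item.select('h2')[0].get_text(strip=True),
            'link': item.select('h2 a')[0]['href'],
            'price': item.select('.price__container .item__price')[0].get_text(strip=True),
            'image': item.select('.image-content a img')[0]['data-src'],
        })
    except Exception:
        continue  # skip blocks that are missing any of the fields

# Write the collected rows to a CSV file (the filename is arbitrary)
with open('mercadolibre_products.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.DictWriter(f, fieldnames=['title', 'link', 'price', 'image'])
    writer.writeheader()
    writer.writerows(products)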

If you want to use this in production and need to scale to thousands of links, you will find that your IP gets blocked by MercadoLibre fairly quickly. In this scenario, using a rotating proxy service to rotate IPs is a must. You can use a service like Proxies API to route your calls through a pool of millions of residential proxies.
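
For illustration, requests can route a call through a proxy with its proxies argument. The sketch below uses a hypothetical proxy endpoint and API key, so substitute whatever your proxy provider actually gives you.

# Hypothetical proxy endpoint and credentials -- replace with your provider's real values
proxies = {
    'http': 'http://YOUR_API_KEY:@proxy.example.com:8000',
    'https': 'http://YOUR_API_KEY:@proxy.example.com:8000',
}

# Same request as before, but routed through the rotating proxy
response = requests.get(url, headers=headers, proxies=proxies)
soup = BeautifulSoup(response.content, 'lxml')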

If you need to scale your crawling speed and don't want to set up your own infrastructure, you can use the cloud-based crawler by Web Screen Scraping to easily crawl thousands of URLs at high speed from our network of crawlers.

If you are looking for the best MercadoLibre scraping with Python and Beautiful Soup, you can contact Web Screen Scraping for all your requirements.
