I am trying to retrieve the table of tenders on the following website: https://wbgeconsult2.worldbank.org/wbgec/index.html#$h=1582042296662 (after following the link, you need to click 'Business Opportunities' at the top right to reach the table).
I tried pandas read_html, Selenium and BeautifulSoup, all of which failed (they simply don't detect the table elements at all). I also tried to find a usable link in the Network tab of the dev tools, but none of them seem to work. Is this even possible? What am I doing wrong?
Here is my code:
from selenium import webdriver
from selenium.webdriver import ActionChains
import time
from bs4 import BeautifulSoup
import pandas as pd
import requests

URL = 'https://wbgeconsult2.worldbank.org/wbgec/index.html#$h=1582042296662'

# Enter Gecko driver path
driver = webdriver.Firefox(executable_path='/Users/****/geckodriver')
driver.get(URL)
# driver.minimize_window()

# Open the 'Business Opportunities' page
opp_path = '//*[@id="menu_publicads"]/a'
list_ch = driver.find_element_by_xpath(opp_path)
ActionChains(driver).click(list_ch).perform()
time.sleep(5)

# Click the publication-date column header twice to sort the grid
sort_xpath = '//*[@id="jqgh_selection_notification.publication_date"]'
for _ in range(2):
    list_ch = driver.find_element_by_xpath(sort_xpath)
    ActionChains(driver).click(list_ch).perform()
    time.sleep(5)

# Attempt 1: plain requests never sees the JavaScript-rendered table
resp = requests.get(URL)
soup = BeautifulSoup(resp.content, 'lxml')
rows = soup.findAll('td')
print(rows)

# Attempt 2: Selenium also finds no table rows
ti = driver.find_elements_by_xpath('//tr')
for t in ti:
    print(t.text)
The data is loaded from an external URL via an XML (SOAP) request. You can use this example to load and parse the data into a DataFrame:
import requests
import pandas as pd
from bs4 import BeautifulSoup

url = "https://wbgeconsult2.worldbank.org/wbgect/gwproxy"

# SOAP payload captured from the page's network traffic
payload = """<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"><soapenv:Body><GetCurrentPublicNotifications xmlns="http://cordys.com/WBGEC/DBT_Selection_Notification/1.0"><NotifTypeId3 xmlns="undefined">3</NotifTypeId3><DS type="dsort"><selection_notification.eoi_deadline order="asc"></selection_notification.eoi_deadline></DS></GetCurrentPublicNotifications></soapenv:Body></soapenv:Envelope>"""

soup = BeautifulSoup(requests.post(url, data=payload).content, "xml")

# uncomment this to print all data:
# print(soup.prettify())

data = []
for sn in soup.select("SELECTION_NOTIFICATION"):
    d = {}
    for tag in sn.find_all(recursive=False):
        d[tag.name] = tag.get_text(strip=True)
    data.append(d)

df = pd.DataFrame(data)
print(df)
df.to_csv("data.csv", index=False)
Prints:
ID PUBLICATION_DATE EOI_DEADLINE LANGUAGE_OF_NOTICE ADVERTISE_UNTIL TITLE SELECTION_TYPE_NAME SELECTION_TYPE_ID SELECTION_NUMBER SOLICITATION_OR_FRAMEWORK SELECTION_STATUS_ID SELECTION_SUB_STATUS_ID
0 148625 2021-04-16T00:00:00.0 2021-04-26T23:59:59.900000000 English 2021-04-26T23:59:59.0 Zanzibar PPP Diagnostic and Pipeline Firm 2 1274225 2 8
1 148536 2021-04-14T00:00:00.0 2021-04-26T23:59:59.900000000 English 2021-04-26T23:59:59.0 Assessment of Institutional Capacity for Imple... Firm 2 1274123 2 8
2 148310 2021-04-12T00:00:00.0 2021-04-26T23:59:59.900000000 English 2021-04-26T23:59:59.0 Albania Digital Jobs Pilot Firm 2 1273851 2 8
3 148399 2021-04-12T00:00:00.0 2021-04-26T23:59:59.900000000 English 2021-04-26T23:59:59.0 EaP - Green Financing for Transport Infrastruc... Firm 2 1273953 2 8
4 148448 2021-04-12T00:00:00.0 2021-04-26T23:59:59.900000000 English 2021-04-26T23:59:59.0 Surveying LGBTI people in North Macedonia and ... Firm 2 1274001 2 8
5 148277 2021-04-14T00:00:00.0 2021-04-26T23:59:59.900000000 English 2021-04-26T23:59:59.0 SME FINANCE FORUM 2021 WEBSITES REVAMP Firm 2 1273810 2 8
...
and saves data.csv
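As a possible follow-up (a sketch, not part of the answer above): the deadline columns come back as ISO-8601 strings, so converting them with pandas makes filtering and sorting by deadline straightforward. The column names here match the output above; the sample values are taken from the printed DataFrame, and the cutoff date is an arbitrary example.

```python
import pandas as pd

# Two sample rows copied from the output above
df = pd.DataFrame({
    "TITLE": [
        "Zanzibar PPP Diagnostic and Pipeline",
        "Albania Digital Jobs Pilot",
    ],
    "EOI_DEADLINE": [
        "2021-04-26T23:59:59.900000000",
        "2021-04-26T23:59:59.900000000",
    ],
})

# Parse the ISO-8601 strings into proper datetimes
df["EOI_DEADLINE"] = pd.to_datetime(df["EOI_DEADLINE"])

# Keep only tenders whose deadline is on or after some cutoff date
cutoff = pd.Timestamp("2021-04-20")
open_tenders = df[df["EOI_DEADLINE"] >= cutoff]
print(open_tenders.sort_values("EOI_DEADLINE"))
```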