ruby · nokogiri · rest-client

Web crawler skipping URLs


I'm writing a program that scans for vulnerable websites. I happen to know that a couple of the sites have vulnerabilities and return a SQL syntax error; however, when I run the program it skips over these sites and never outputs that they were found or that they were saved to a file. This program is being used for pentesting, and all site owners have been made aware of the vulnerability.

Source:

def get_urls
  info("Searching for possible SQL vulnerable sites.")
  @agent = Mechanize.new
  page = @agent.get('http://www.google.com/')
  google_form = page.form('f')
  google_form.q = "#{SEARCH}"
  url = @agent.submit(google_form, google_form.buttons.first)
  url.links.each do |link|
    if link.href.to_s =~ /url.q/
      str = link.href.to_s
      str_list = str.split(%r{=|&})
      urls = str_list[1]
      next if str_list[1].split('/')[2] == "webcache.googleusercontent.com"
      urls_to_log = urls.gsub("%3F", '?').gsub("%3D", '=')
      success("Site found: #{urls_to_log}")
      File.open("#{PATH}/temp/SQL_sites_to_check.txt", "a+") {|s| s.puts("#{urls_to_log}'")}
    end
  end
  info("Possible vulnerable sites dumped into #{PATH}/temp/SQL_sites_to_check.txt")
end

def check_if_vulnerable
  info("Checking if sites are vulnerable.")
  IO.read("#{PATH}/temp/SQL_sites_to_check.txt").each_line do |parse|
    begin
      Timeout::timeout(5) do
        parsing = Nokogiri::HTML(RestClient.get("#{parse.chomp}")) 
      end
    rescue Timeout::Error, RestClient::ResourceNotFound, RestClient::SSLCertificateNotVerified, Errno::ECONNABORTED, Mechanize::ResponseCodeError, RestClient::InternalServerError => e
      if e
        warn("URL: #{parse.chomp} failed with error: [#{e}] dumped to non_exploitable.txt")
        File.open("#{PATH}/lib/non_exploitable.txt", "a+"){|s| s.puts(parse)}
      else 
        success("SQL syntax error discovered in URL: #{parse.chomp} dumped to SQL_VULN.txt")
        File.open("#{PATH}/lib/SQL_VULN.txt", "a+"){|vuln| vuln.puts(parse)}
      end
    end
  end
end

Example of usage:

[22:49:29 INFO]Checking if sites are vulnerable.
[22:49:53 WARNING]URL: http://www.police.bd/content.php?id=275' failed with error: [execution expired] dumped to non_exploitable.txt

File containing the URLs:

http://www.bible.com/subcat.php?id=2'
http://www.cidko.com/pro_con.php?id=3'
http://www.slavsandtat.com/about.php?id=25'
http://www.police.bd/content.php?id=275'
http://www.icdcprage.org/index.php?id=10'
http://huawei.com/en/plugin.php?id=hwdownload'
https://huawei.com/en/plugin.php?id=unlock'
https://facebook.com/profile.php?id'
http://www.footballclub.com.au/index.php?id=43'
http://www.mesrs.qc.ca/index.php?id=1525'

As you can see, the program skips over the first three URLs and goes straight to the fourth one. Why?

Am I doing something wrong that causes this to happen?


Solution

  • I'm not sure that rescue block is where it should be. You never do anything with the content you fetch in parsing = Nokogiri::HTML(RestClient.get("#{parse.chomp}")), so for the first three URLs the request probably just succeeds: no exception is raised, and therefore nothing is printed. Note also that inside a rescue block e is always set, so the else branch with the success message can never run. Add some output after that line to confirm the pages are actually being fetched; see the sketch below.
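
For reference, here is one way the loop could look, with output after every successful fetch and with the success/failure handling split between the normal path and the rescue. This is only a rough sketch that reuses PATH and the info/success/warn helpers from the question; the regex that scans the response body for a database error message is purely an assumption about what you want to treat as "vulnerable":

require 'rest-client'
require 'nokogiri'
require 'timeout'

def check_if_vulnerable
  info("Checking if sites are vulnerable.")
  IO.read("#{PATH}/temp/SQL_sites_to_check.txt").each_line do |line|
    url = line.chomp
    begin
      body = Timeout::timeout(5) { RestClient.get(url).body }
      success("Fetched: #{url}")  # output so you can see each URL that comes back fine
      page = Nokogiri::HTML(body)
      # Hypothetical check: treat a database error message in the page text as a hit.
      if page.text =~ /SQL syntax|mysql_fetch|ORA-\d+/i
        success("SQL syntax error discovered in URL: #{url} dumped to SQL_VULN.txt")
        File.open("#{PATH}/lib/SQL_VULN.txt", "a+") { |f| f.puts(url) }
      end
    rescue Timeout::Error, RestClient::ResourceNotFound, RestClient::SSLCertificateNotVerified,
           RestClient::InternalServerError, Errno::ECONNABORTED => e
      # Only failed requests end up here now.
      warn("URL: #{url} failed with error: [#{e}] dumped to non_exploitable.txt")
      File.open("#{PATH}/lib/non_exploitable.txt", "a+") { |f| f.puts(url) }
    end
  end
end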