Tags: jquery, ajax, google-crawlers

Google's #! Ajax Implementation - Not working with jQuery


OK, I'm banging my head against the desk and obviously missing something simple.

I'm trying to make my AJAX pages crawlable by Google, but it isn't working.

By the way, the content I load has links in it and serves as primary navigation.

    <script type="text/javascript">
      // Test script: shared defaults for every $.ajax call on the page
      $.ajaxSetup({
        type: "GET",
        url: "UpdateResults7.php#!",
        /* dataType: "text/html" removed when moving from jQuery 1.4.1 to 1.5 */
        beforeSend: function () { $("#txtResult").html("Pending"); },
        success: function (html) { $("#txtResult").html(html); }
      }); // close $.ajaxSetup

      function dynamic_Select(state) {
        var myData = { pass_type: "<?php echo $pass_type; ?>", pass_state: state };
        // $.post("setSession.php", { pass_state: state });
        $.ajax({ data: myData }); // close $.ajax
      } // close dynamic_Select
    </script>
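
For a #! URL to do anything when someone lands on it directly, the client script also needs to read the fragment on page load and re-run the matching AJAX call. Here is a minimal sketch of that parsing step; `parseHashbang` is a hypothetical helper, not part of the original code, and the keys assume the `pass_state` parameter used above:

```javascript
// Hypothetical helper: turn "#!key=value&key2=value2" into an object.
// Adapt the keys to whatever state your app actually encodes in the fragment.
function parseHashbang(hash) {
  var params = {};
  if (hash.indexOf("#!") !== 0) { return params; } // not a hashbang fragment
  var pairs = hash.slice(2).split("&");
  for (var i = 0; i < pairs.length; i++) {
    var parts = pairs[i].split("=");
    if (parts[0]) { params[parts[0]] = decodeURIComponent(parts[1] || ""); }
  }
  return params;
}

// On page load, replay the navigation the fragment describes, e.g.:
//   var params = parseHashbang(window.location.hash);
//   if (params.pass_state) { dynamic_Select(params.pass_state); }
```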

Thanks in advance!


OK, thanks for your help. Here is where I stand:

1. My pages have been implemented with #! and the AJAX is working fine.
2. I went into Firefox and copied and saved an HTML version of the file.
3. I put some script at the top of my PHP file to detect "?_escaped_fragment_=" and redirect to the HTML file.

However, when I check with Google's Fetch as Googlebot tool, it doesn't display any of the AJAX-generated content. I've even loaded the "?_escaped_fragment_=" URL into the address bar to confirm it loads the proper HTML copy, and indeed it does.

Here is the code at the top of the page:

    if (isset($_GET['_escaped_fragment_'])) {
        header("Location: solidgreen-partners.html"); // sprintf() removed: its format string had no placeholder for $insertGoTo
        $path = $_SERVER['PHP_SELF'];
        //generateStaticPHP($path);
    }

The last couple of lines aren't working. I was trying to generate the HTML snapshot on the fly as described at http://code.google.com/web/ajaxcrawling/docs/html-snapshot.html

Thanks again for your help!


Solution

  • It doesn't look like you have the right idea about how this is supposed to be implemented.

    The public-facing (or search-engine-facing) links on your site should employ the hashbang syntax (#!) where appropriate. This indicates to Google (and perhaps other search engines) that your site is AJAX-crawlable. Googlebot will then actually request those pages using a GET parameter called _escaped_fragment_. Your application must accept this parameter and utilize it to return an HTML snapshot to Googlebot.

    So a URL on your site such as:

    http://mydomain.com/mypage.html#!somevar=somevalue
    

    will actually be requested by Googlebot as:

    http://mydomain.com/mypage.html?_escaped_fragment_=somevar=somevalue
    

    Your app then takes the value of _escaped_fragment_, parses out the parameters, and builds and returns the appropriate HTML.

    None of this, however, applies to the actual AJAX calls you make internally on your site, which is your problem here.

    See http://code.google.com/web/ajaxcrawling/docs/getting-started.html for more info.
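
    The pretty-to-ugly mapping above can be sketched as a small function. `toEscapedFragmentUrl` is a hypothetical name, and the escaping is simplified (only % and &) relative to Google's full specification:

```javascript
// Sketch of the rewrite Googlebot applies before requesting a #! URL.
function toEscapedFragmentUrl(url) {
  var idx = url.indexOf("#!");
  if (idx === -1) { return url; } // no hashbang: requested unchanged
  var base = url.slice(0, idx);
  var fragment = url.slice(idx + 2);
  // Simplified escaping: the spec percent-escapes a handful of characters;
  // "%" and "&" are the ones that matter for key=value state.
  var escaped = fragment.replace(/%/g, "%25").replace(/&/g, "%26");
  var sep = base.indexOf("?") === -1 ? "?" : "&";
  return base + sep + "_escaped_fragment_=" + escaped;
}
```

    So the example URL from above maps to exactly the ugly form Googlebot requests, and your server-side code inverts it: read $_GET['_escaped_fragment_'], un-escape it, and render the same HTML the AJAX version would show.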