Is it possible to prevent a site from being scraped by any scrapers, but at the same time allow search engines to parse your content?
Just checking the User-Agent is not the best option, because it is very easy to fake.
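For example, any scraper can claim to be Googlebot with a single header. A minimal sketch using Python's standard library (the URL is just a placeholder):

```python
import urllib.request

# A scraper can send Googlebot's User-Agent string with one header,
# which is why User-Agent checks alone prove nothing.
req = urllib.request.Request(
    "https://example.com/",
    headers={
        "User-Agent": "Mozilla/5.0 (compatible; Googlebot/2.1; "
                      "+http://www.google.com/bot.html)"
    },
)
html = urllib.request.urlopen(req).read()
```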
JavaScript checks could be an option (Google executes JS), but a good scraper can handle that too.
Any ideas?
Use DNS checking, Luke! :)
The same approach is described in Google's help article, Verifying Googlebot.
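A minimal sketch of that reverse/forward DNS check in Python (the function name and example IP are just for illustration): do a reverse DNS lookup on the requesting IP, check that the hostname ends in googlebot.com or google.com, then do a forward lookup on that hostname and confirm it resolves back to the same IP.

```python
import socket

def is_verified_googlebot(ip_address: str) -> bool:
    """Verify an IP via the reverse/forward DNS check Google documents."""
    try:
        # Reverse DNS lookup: get the hostname for the requesting IP.
        hostname, _, _ = socket.gethostbyaddr(ip_address)
    except OSError:
        return False

    # Only hostnames under googlebot.com or google.com count.
    if not hostname.endswith((".googlebot.com", ".google.com")):
        return False

    try:
        # Forward DNS lookup: the hostname must resolve back to the same IP.
        resolved_ips = socket.gethostbyname_ex(hostname)[2]
    except OSError:
        return False

    return ip_address in resolved_ips

# Example: an IP that reverse-resolves to crawl-*.googlebot.com passes,
# while a scraper merely spoofing the User-Agent fails the check.
print(is_verified_googlebot("66.249.66.1"))
```

Unverified visitors can then be throttled or blocked while verified crawlers are let through; the DNS results are worth caching, since two lookups per request would be expensive.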