I just want to know your opinion about how to fingerprint/verify HTML/link structure.
The problem I want to solve is: fingerprint, for example, 10 different sites (HTML pages), and after some time be able to verify them; that is, if a site has changed or its links have changed, verification fails, otherwise it succeeds. My base idea is to analyze the link structure by splitting it up in some way, building some kind of tree, and generating some kind of code from that tree (roughly as in the sketch below). But I'm still in the brainstorming stage, where I need to discuss this with someone and hear other ideas.
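To make that concrete, here is a rough Python sketch of what I have in mind, using only the standard library. The choice of what goes into the fingerprint (the nesting path of each `<a>` tag plus its `href`) is just an assumption for illustration, and the names (`LinkStructureParser`, `fingerprint`) are mine, not from any existing library:

    import hashlib
    from html.parser import HTMLParser

    # Tags that never get a closing tag, so they must not stay on the stack.
    VOID_TAGS = {"area", "base", "br", "col", "embed", "hr", "img",
                 "input", "link", "meta", "param", "source", "track", "wbr"}

    class LinkStructureParser(HTMLParser):
        """Collects (nesting path, href) pairs for every link, in document order."""
        def __init__(self):
            super().__init__()
            self.stack = []      # currently open tags
            self.records = []    # (path, href) per <a> encountered

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href", "")
                self.records.append(("/".join(self.stack), href))
            if tag not in VOID_TAGS:
                self.stack.append(tag)

        def handle_endtag(self, tag):
            # Tolerate unclosed tags: pop back to the matching open tag if present.
            if tag in self.stack:
                while self.stack and self.stack.pop() != tag:
                    pass

    def fingerprint(html_text: str) -> str:
        """Reduce a page's link structure to a single hex digest."""
        parser = LinkStructureParser()
        parser.feed(html_text)
        serialized = "\n".join(f"{path}|{href}" for path, href in parser.records)
        return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

    # Verification later would just be a digest comparison:
    #   stored_digest == fingerprint(freshly_fetched_html)

So the "code" for a page would be the digest, and verification would simply compare the stored digest with a freshly computed one. I'm not sure this is the right granularity, which is why I'm asking.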
So any ideas, algorithms, or suggestions would be useful.
Whatever data or structure you intend to hash, summarize, or otherwise fingerprint, be sure to account for the various forms of noise found on many of the websites "out there".
Examples of such noise or random content are: