Tags: java, mysql, web-crawler, plagiarism-detection

Plagiarism Analyzer (compared against Web Content)


Hi everyone all over the world,

Background

I am a final-year Computer Science student. I've proposed my Final Double Module Project: a Plagiarism Analyzer built with Java and MySQL.

The Plagiarism Analyzer will:

  1. Scan every paragraph of an uploaded document and report what percentage of each paragraph was copied, and from which website.
  2. Highlight, in each paragraph, only the words copied exactly, along with the website they came from (see the sketch after this list).
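
To make requirement 2 concrete, here is a minimal sketch in Java of one way exact-copy highlighting could work, assuming a candidate source page has already been fetched as plain text. The class name, the sample strings, and the 5-word run length are all illustrative assumptions; a real system would tune the run length and normalise punctuation properly.

```java
import java.util.*;

/** Minimal sketch (hypothetical class and sample data): flags words in a
 *  paragraph that appear verbatim in a candidate source, using word n-grams,
 *  and reports the copied percentage. NGRAM = 5 is an assumed run length. */
public class CopyHighlighter {
    private static final int NGRAM = 5; // how many consecutive words count as "copied"

    public static void main(String[] args) {
        String source = "the quick brown fox jumped over the lazy dog and ran away";
        String paragraph = "we saw the quick brown fox jumped over the lazy dog yesterday";

        // Naive whitespace tokenization; real code would strip punctuation too.
        String[] words = paragraph.toLowerCase().split("\\s+");
        boolean[] copied = new boolean[words.length];
        String normalizedSource = source.toLowerCase();

        // Slide a window of NGRAM words over the paragraph; if the window text
        // occurs verbatim in the source, mark every word in the window as copied.
        for (int i = 0; i + NGRAM <= words.length; i++) {
            String window = String.join(" ", Arrays.copyOfRange(words, i, i + NGRAM));
            if (normalizedSource.contains(window)) {
                for (int j = i; j < i + NGRAM; j++) copied[j] = true;
            }
        }

        int copiedCount = 0;
        StringBuilder highlighted = new StringBuilder();
        for (int i = 0; i < words.length; i++) {
            if (copied[i]) copiedCount++;
            highlighted.append(copied[i] ? "[" + words[i] + "]" : words[i]).append(' ');
        }
        System.out.println(highlighted.toString().trim());
        System.out.printf("Copied: %.0f%%%n", 100.0 * copiedCount / words.length);
    }
}
```

This marks the nine-word run shared with the source and reports 75% of the paragraph as copied, covering both requirements at once.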

My main objective is to develop something like Turnitin, and to improve on it if possible.

I have less than 6 months to develop the program. I have scoped the following:

  1. Web crawler implementation. I will probably use the Lucene API or develop my own crawler (which is better in terms of development time and usability?).
  2. Hashing and indexing, to speed up searching and analysis (a small sketch of paragraph hashing follows this list).
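
On hashing (and question 5 below): yes, both individual words and whole paragraphs can be hashed. A common approach is w-shingling, where a paragraph is reduced to a set of hashed word n-grams that can be stored and compared cheaply. Here is a minimal sketch, assuming a shingle size of 4 and Java's built-in hashCode; a real system would tune the shingle size and likely use MD5 or SHA-1 to reduce collisions.

```java
import java.util.*;

/** Minimal sketch, not a full index: hashes a paragraph into a set of
 *  word-shingle hashes so two paragraphs can be compared cheaply. */
public class ShingleHasher {

    static Set<Integer> shingleHashes(String paragraph, int shingleSize) {
        String[] words = paragraph.toLowerCase().split("\\W+");
        Set<Integer> hashes = new HashSet<>();
        for (int i = 0; i + shingleSize <= words.length; i++) {
            String shingle = String.join(" ", Arrays.copyOfRange(words, i, i + shingleSize));
            hashes.add(shingle.hashCode()); // store one int per shingle, not the text
        }
        return hashes;
    }

    /** Jaccard similarity: |A ∩ B| / |A ∪ B|. */
    static double jaccard(Set<Integer> a, Set<Integer> b) {
        Set<Integer> inter = new HashSet<>(a);
        inter.retainAll(b);
        Set<Integer> union = new HashSet<>(a);
        union.addAll(b);
        return union.isEmpty() ? 0.0 : (double) inter.size() / union.size();
    }

    public static void main(String[] args) {
        Set<Integer> doc = shingleHashes("The quick brown fox jumped over the lazy dog.", 4);
        Set<Integer> web = shingleHashes("A quick brown fox jumped over the lazy dog today.", 4);
        System.out.printf("Resemblance: %.2f%n", jaccard(doc, web));
    }
}
```

Comparing shingle sets with Jaccard similarity is a standard near-duplicate detection technique (Broder's resemblance measure), and the hash sets are far cheaper to store and index than the original text.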

Questions

Here are my questions:

  1. Can MySQL store that much information?
  2. Did I miss any important topics?
  3. What are your opinions concerning this project?
  4. Any suggestions or techniques for performing the similarity analysis?
  5. Can a paragraph be hashed, as well as words?

Thanks in advance for any help and advice. ^^


Solution

  • Have you considered another project, one that isn't doomed to failure on account of the lack of resources available to you?

    If you really want to go the "Hey, let's crawl the whole web!" route, you're going to need to break out things like HBase and Hadoop and lots of machines. MySQL will be grossly insufficient. TurnItIn claims to have crawled and indexed 12 billion pages. Google's index is more like [redacted]. MySQL, or for that matter, any RDBMS, cannot scale to that level.

    The only realistic way you're going to be able to pull this off is if you do something astonishingly clever and figure out how to construct queries to Google that will reveal plagiarism of documents that are already present in Google's index. I'd recommend using a message queue and accessing the search API synchronously; the message queue will also allow you to throttle your queries down to a reasonable rate.

    Avoid stop words, but you're still looking for near-exact matches, so queries should look like: "* quick brown fox jumped over * lazy dog". Don't bother running queries that end up like: "* * went * * *" (a sketch of this query heuristic appears at the end of this answer). And ignore results that come back with 94,000,000 hits: those won't be plagiarism, they'll be famous quotes or overly general queries. You're looking for either under 10 hits, or a few thousand hits that all have an exact match on your original sentence, or some similar metric. Even then, this should only be a heuristic; don't flag a document unless there are lots of red flags. Conversely, if everything comes back as zero hits, the writer is being unusually original. Book search typically needs more precise queries.

    Sufficiently suspicious material should trigger HTTP requests for the original pages, and final decisions should always be the purview of a human being. If a document cites its sources, that's not plagiarism, and you'll want to detect that. False positives are inevitable, and will likely be common, if not constant.

    Be aware that Google's TOS prohibit permanently storing any portion of its index.

    Regardless, you have chosen to do something exceedingly hard, no matter how you build it, and likely very expensive and time-consuming unless you involve Google.
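
As an illustration of the query heuristic above, here is a minimal sketch in Java. The stop-word list and the "enough content words" thresholds are placeholder assumptions, not anything Google specifies; it only shows how a sentence might be turned into a quoted wildcard query, and how overly general queries can be skipped before they are ever sent.

```java
import java.util.*;

/** Minimal sketch (not production code) of the query heuristic described above:
 *  keep content words, replace stop words with Google's "*" wildcard, and skip
 *  queries that end up mostly wildcards. The stop-word list and thresholds here
 *  are placeholder assumptions. */
public class QueryBuilder {
    private static final Set<String> STOP_WORDS = new HashSet<>(Arrays.asList(
            "the", "a", "an", "and", "or", "of", "to", "in", "is", "was", "he", "she"));

    /** Returns a quoted wildcard query, or null when too few content words survive. */
    static String buildQuery(String sentence) {
        String[] words = sentence.toLowerCase().replaceAll("[^a-z\\s]", "").trim().split("\\s+");
        StringBuilder query = new StringBuilder("\"");
        int contentWords = 0;
        for (int i = 0; i < words.length; i++) {
            if (i > 0) query.append(' ');
            if (STOP_WORDS.contains(words[i])) {
                query.append('*');            // wildcard in place of a stop word
            } else {
                query.append(words[i]);
                contentWords++;
            }
        }
        query.append('"');
        // Don't bother running queries like "* * went * * *": require enough substance.
        return (contentWords >= 3 && contentWords >= words.length / 2) ? query.toString() : null;
    }

    public static void main(String[] args) {
        System.out.println(buildQuery("The quick brown fox jumped over the lazy dog"));
        System.out.println(buildQuery("He and she went to the store")); // skipped -> null
    }
}
```

The first call prints "* quick brown fox jumped over * lazy dog", matching the example in the answer; the second is rejected as too wildcard-heavy to be worth a query.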