In my nutch-site.xml I added the following property to stop content truncation; however, during the fetch process I get the error below. I want it to stop truncating and return the full content, which I assumed a value of -1 would achieve. I'm using version 2.2.1. Any ideas?
<property>
  <name>http.content.limit</name>
  <value>-1</value>
  <description>The length limit for downloaded content using the http
  protocol, in bytes. If this value is nonnegative (>=0), content longer
  than it will be truncated; otherwise, no truncation at all. Do not
  confuse this setting with the file.content.limit setting.
  </description>
</property>
Exception in thread "main" java.lang.RuntimeException: job failed: name=fetch, jobid=job_local1185573074_0001
    at org.apache.nutch.util.NutchJob.waitForCompletion(NutchJob.java:55)
    at org.apache.nutch.fetcher.FetcherJob.run(FetcherJob.java:194)
    at org.apache.nutch.fetcher.FetcherJob.fetch(FetcherJob.java:219)
    at org.apache.nutch.fetcher.FetcherJob.run(FetcherJob.java:301)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
    at org.apache.nutch.fetcher.FetcherJob.main(FetcherJob.java:307)
I solved this by removing the http.content.limit property from nutch-site.xml and instead adding parser.skip.truncated set to false:
<property>
  <name>parser.skip.truncated</name>
  <value>false</value>
  <description>Boolean value for whether we should skip parsing for truncated documents. By default this
  property is activated due to extremely high levels of CPU which parsing can sometimes take.
  </description>
</property>
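For reference, here is a minimal sketch of what the relevant part of conf/nutch-site.xml looks like after the change, assuming http.content.limit is simply left undeclared (so it falls back to its default) rather than set to -1; any other properties in your file stay as they are.

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <!-- http.content.limit is intentionally not declared here, so Nutch
       keeps its default limit and very large pages are still truncated. -->

  <!-- Parse documents even when they were truncated during fetching,
       instead of skipping them. -->
  <property>
    <name>parser.skip.truncated</name>
    <value>false</value>
  </property>
</configuration>

The trade-off is that only the fetched portion of a truncated page gets parsed, so very large documents are still cut off at the content limit, but the fetch job no longer fails the way it did with http.content.limit set to -1.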