In a meeting a few days ago, someone said that "using Hekaton increases the size of the transaction logs which increases the time to fail over" while describing the challenges of an AlwaysOn SQL cluster that uses Hekaton in-memory tables. I'm not a SQL expert, so I'm wondering whether this is a true statement and, if so, what is going on that would make Hekaton transaction logs larger than they would be without Hekaton?
I believe that the exact opposite is the case. The log record for an In-Memory (Hekaton) transaction is a logical description of the transaction, rather than a record of all the index and page modifications that go along with a change to a non-In-Memory table.
The log contains the logical effects of committed transactions sufficient to redo the transaction. The changes are recorded as insertions and deletions of row versions labeled with the table they belong to. No undo information is logged.
Hekaton index operations are not logged. All indexes are reconstructed on recovery.
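To make that concrete, here is a rough sketch of a durable memory-optimized table (the table and column names are made up for illustration). Because DURABILITY = SCHEMA_AND_DATA, committed row versions are logged and checkpointed, but the hash index declared inline only ever exists in memory and is reconstructed during recovery:

    -- Hypothetical durable memory-optimized table (names are illustrative).
    -- Committed row data is logged; the hash index below is not logged or
    -- checkpointed, it is rebuilt in memory when the database recovers.
    CREATE TABLE dbo.SessionState
    (
        SessionId   INT             NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
        UserId      INT             NOT NULL,
        Payload     VARBINARY(8000) NULL,
        LastUpdated DATETIME2       NOT NULL
    )
    WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);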
Checkpoints are in effect a compressed representation of the log. Checkpoints allow the log to be truncated and improve crash recovery performance.
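If you want to see those checkpoint files on a test system, there is a DMV for them. This is only a sketch; the exact columns exposed by sys.dm_db_xtp_checkpoint_files differ somewhat between SQL Server versions:

    -- Sketch: list the data/delta checkpoint file pairs for memory-optimized
    -- data in the current database (column names vary by version).
    SELECT file_type_desc,
           state_desc,
           file_size_in_bytes
    FROM sys.dm_db_xtp_checkpoint_files;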
Since the tail of the transaction log is typically a bottleneck, reducing the number of log records appended to the log can improve scalability and significantly increase efficiency. Furthermore, the content of the log for each transaction requires less space than in systems that generate one log record per operation.
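A rough way to see this for yourself on a throwaway database (this is a sketch: sys.fn_dblog is undocumented, and the LOP_HK operation name for Hekaton log records is what I have observed rather than anything guaranteed) is to run the same small insert against the memory-optimized table sketched above and against an equivalent disk-based table, then compare what shows up in the log:

    -- Sketch: compare log volume for equivalent work. Repeat the insert
    -- against a disk-based copy of the table and compare the results.
    -- Memory-optimized changes typically appear as a handful of LOP_HK
    -- records instead of one record per row/index/page modification.
    CHECKPOINT;  -- cuts down the noise on a SIMPLE-recovery test database

    INSERT INTO dbo.SessionState (SessionId, UserId, Payload, LastUpdated)
    VALUES (1, 1, NULL, SYSUTCDATETIME()),
           (2, 1, NULL, SYSUTCDATETIME()),
           (3, 1, NULL, SYSUTCDATETIME());

    SELECT [Operation],
           COUNT(*)                 AS record_count,
           SUM([Log Record Length]) AS total_bytes
    FROM sys.fn_dblog(NULL, NULL)
    GROUP BY [Operation]
    ORDER BY total_bytes DESC;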
The Microsoft research paper on Hekaton covers these points in detail:
http://research.microsoft.com/pubs/193594/Hekaton%20-%20Sigmod2013%20final.pdf