Backup crash

Nov 17, 2015 at 9:36 AM
I am trying to index around 5M docs in 6 views. Everything goes well and it is quite fast (around 10-15 minutes). But when I try to shut down the RaptorDB instance, I get a crash during backup:
** Exception(Stack trace(terminating: True):    at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
   at System.IO.File.InternalMove(String sourceFileName, String destFileName, Boolean checkHost)
   at RaptorDB.RaptorDB.Backup()
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Threading.ThreadHelper.ThreadStart() ):[Could not find a part of the path.], SRC:[mscorlib]
And the inner exception is:
 ** Exception- : Could not find a part of the path. [mscorlib]
==  (System.IO.DirectoryNotFoundException)========
Exception:
--------------------
Could not find a part of the path.
--------------------
   at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
   at System.IO.File.InternalMove(String sourceFileName, String destFileName, Boolean checkHost)
   at RaptorDB.RaptorDB.Backup()
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
   at System.Threading.ThreadHelper.ThreadStart()
=====================
I tried to debug it on my local machine, but it starts to eat my memory and the machine becomes barely usable, so I cannot rely on it. Instead, I have to build in release mode, push the build to a server and run it there.

This exception is not triggered every time, only about 1 run out of 2. When the number of documents is smaller, it doesn't happen at all.

The disk is otherwise unused and I have a few hundred GB available, so space should not be a problem. The machine (a physical machine, not a VM) is also idle; no processes are running other than my test.

My questions are:
  • I don't need backups. Can I disable them?
  • Apart from adding data to views, I am not doing anything else, just the regular loop of
while (dataReader.Read()) { raptorDB.Add(...); }
from multiple record sets. Should I do something else, or enable some flags?
  • Anything else I should check?
Nov 17, 2015 at 9:57 AM
Thanks, I will look into the error.

Backups can't currently be disabled, but you can change the default of backing up every hour to something like once a year in raptordb.config (the schedule below fires Dec 1st at 00:00):
...
   "BackupCronSchedule" : "0 0 1 12 *", // min hour dayofmonth month weekday
...
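For reference, the five fields of that schedule can be interpreted with a minimal matcher like the sketch below. This is only an illustration of the "min hour dayofmonth month weekday" layout; RaptorDB's actual scheduler is assumed to support more cron syntax (ranges, lists, steps) than this:

```csharp
using System;

// Minimal matcher for a 5-field cron expression
// ("min hour dayofmonth month weekday"); this sketch only supports
// plain numbers and "*", not ranges/lists/steps.
public static class Cron
{
    public static bool Matches(string expr, DateTime t)
    {
        string[] fields = expr.Split(' ');
        // .NET's DayOfWeek starts at Sunday = 0, matching cron convention
        int[] actual = { t.Minute, t.Hour, t.Day, t.Month, (int)t.DayOfWeek };
        for (int i = 0; i < 5; i++)
            if (fields[i] != "*" && int.Parse(fields[i]) != actual[i])
                return false;
        return true;
    }
}
```

So `Cron.Matches("0 0 1 12 *", new DateTime(2015, 12, 1, 0, 0, 0))` is true, and the schedule fires only once a year.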
Nov 17, 2015 at 9:57 AM
I took a closer look at the repository and noticed that the following folders were not created (compared with other repositories I opened):
  • Backup
  • Restore
  • Temp
And files:
  • -RaptorDB.config
  • -RaptorDB-Branch.config
  • -RaptorDB-Replication.config
are also missing ...
Nov 18, 2015 at 1:06 PM
Do you think it has anything to do with size? I am testing with a "good" version (one that does not crash at the end) and opening it takes 15-16 minutes. Is this normal? Also, does it matter in any way that I am generating the "database" on one machine with one project and reading it from another machine with another project, when under the hood both use the same lib where the documents/views/schemas are defined?

Nov 24, 2015 at 4:33 PM
I think it might have been related to extra data types (like sbyte/*) which were not handled well by the binary JSON serializer. Anyway, I reverted everything to "supported" types and now it seems to be fine. However, I have other issues with backup (see new thread ;-) ).
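The "revert to supported types" workaround can be sketched like this; `SensorDoc`, its fields and `From(...)` are hypothetical names for illustration, not RaptorDB API:

```csharp
using System;

// Hypothetical document class: store the value as int (a commonly
// supported type) instead of sbyte, widening at the boundary so the
// serializer never sees an sbyte.
public class SensorDoc
{
    public Guid Id { get; set; }
    public int Level { get; set; }   // was: public sbyte Level

    public static SensorDoc From(Guid id, sbyte rawLevel)
    {
        // sbyte -> int is an implicit, lossless widening conversion
        return new SensorDoc { Id = id, Level = rawLevel };
    }
}
```

The same idea applies to any exotic numeric type: widen it to int/long in the document class and convert only at the edges of the application.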