Which is faster, or is it just considered bad code?
Let's say we have a MySQL backend with a table whose primary key enforces a
unique constraint. We are receiving data from multiple distributed systems
that all have the same or a similar implementation.
At some point we will batch-insert on the order of 10 million document rows,
but we only want to store a row if it does not violate the unique constraint.
Which approach would be faster, or considered OK? For example:
try {
    // ...try to insert the document
} catch (MySQLIntegrityConstraintViolationException e) {
    // ...do nothing, since this row is already stored in the database;
    // move on to the next one
}
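To make the first approach concrete, here is a minimal JDBC sketch, assuming a
hypothetical documents(id, payload) table and an open Connection named conn
(the catch clause uses the standard java.sql superclass of the driver-specific
exception):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLIntegrityConstraintViolationException;

static void insertCatchingDuplicates(Connection conn, String id, String payload)
        throws Exception {
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO documents (id, payload) VALUES (?, ?)")) {
        ps.setString(1, id);
        ps.setString(2, payload);
        ps.executeUpdate();
    } catch (SQLIntegrityConstraintViolationException e) {
        // duplicate key: this id is already stored, so we skip the row
        // (with a JDBC 4 driver, MySQLIntegrityConstraintViolationException
        // is a subclass of this exception)
    }
}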
or
// we try to find the document first...
if (!documentFound) {
    // we did not find a document with this id, so we can safely insert it;
    // move on to the next one
}
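And a comparable sketch of the second approach against the same hypothetical
table. Note that with multiple distributed writers there is a race window
between the SELECT and the INSERT, so the unique constraint (and therefore the
exception handling) remains the real safety net:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

static void insertIfAbsent(Connection conn, String id, String payload)
        throws Exception {
    // first round trip: try to find the document
    try (PreparedStatement find = conn.prepareStatement(
            "SELECT 1 FROM documents WHERE id = ?")) {
        find.setString(1, id);
        try (ResultSet rs = find.executeQuery()) {
            if (rs.next()) {
                return; // already stored; move on to the next one
            }
        }
    }
    // second round trip: no document with this id was found, so insert it
    try (PreparedStatement insert = conn.prepareStatement(
            "INSERT INTO documents (id, payload) VALUES (?, ?)")) {
        insert.setString(1, id);
        insert.setString(2, payload);
        insert.executeUpdate();
    }
}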
My guess is that in both cases the id we are trying to insert has to be
"found" either way, since the unique constraint must be validated, but which
of the two is considered more or less OK in terms of speed?
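For what it's worth, MySQL can also push the duplicate check into the
statement itself with INSERT IGNORE (or INSERT ... ON DUPLICATE KEY UPDATE),
which avoids both the extra lookup and the exception handling; a batched
sketch against the same hypothetical table:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.Map;

static void batchInsertIgnore(Connection conn, Map<String, String> docs)
        throws Exception {
    // INSERT IGNORE silently skips rows that would violate the unique key
    try (PreparedStatement ps = conn.prepareStatement(
            "INSERT IGNORE INTO documents (id, payload) VALUES (?, ?)")) {
        for (Map.Entry<String, String> doc : docs.entrySet()) {
            ps.setString(1, doc.getKey());
            ps.setString(2, doc.getValue());
            ps.addBatch();
        }
        ps.executeBatch(); // send the whole batch instead of one row at a time
    }
}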
Side question: Would the answer (in terms of speed, for example) be the same
for MySQL as for MongoDB?