I have the following piece of code:
for (ArticleBasketInBean basketBean : bean.getBasket()) {
    for (ArticleDTO article : dto.getArticleList()) {
        if (basketBean.getArticleReference().equals(article.getArticleReference())) {
            article.setAddedToBasket(true);
        }
    }
}
Clearly the time complexity of this operation is O(n^2). In this case articleReference is unique, so the list returned by bean.getBasket() contains no duplicate articleReference values; the same is true for the list returned by dto.getArticleList().
I want to avoid this nested iteration and write faster code. How can I do that?
If you don't have too many articles/baskets and, as you say, the references are unique, you can do something like this. Let's call R the type of the reference, A the type of an article, and B the type of a basket entry:
// since all references are unique they can serve as map keys, but see below
Map<R, A> mappedArticles = new HashMap<>();

// index the articles by their reference (O(n))
for (A article : dto.getArticleList())
    mappedArticles.put(article.getArticleReference(), article);

// for each basket entry, look up the matching article in O(1)
for (B basket : bean.getBasket()) {
    A article = mappedArticles.get(basket.getArticleReference());
    if (article != null)
        article.setAddedToBasket(true);
}
// the full list of articles is still available via the map's .values()
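Alternatively, a minimal sketch using the concrete types from your question: if the reference type has proper equals/hashCode (here I assume getArticleReference() returns a String; adjust the type parameter otherwise), you can collect the basket references into a Set and flag the articles in a single pass:

// Collect every reference that appears in the basket (O(n)).
// Assumes getArticleReference() returns String.
Set<String> basketReferences = new HashSet<>();
for (ArticleBasketInBean basketBean : bean.getBasket()) {
    basketReferences.add(basketBean.getArticleReference());
}

// Flag each article whose reference occurs in the basket (O(m), with O(1) lookups).
for (ArticleDTO article : dto.getArticleList()) {
    if (basketReferences.contains(article.getArticleReference())) {
        article.setAddedToBasket(true);
    }
}

Either way the overall cost drops from O(n * m) to O(n + m), at the price of one extra collection.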
Notes: