Optimize decoding of large JPX images
Skip the largest scales when decoding JPX images:
A JPEG2000 image is stored as a set of resolution levels (scales).
With the cp_reduce parameter, we tell OpenJPEG to ignore the
<cp_reduce> largest scales. We define the number of scales to ignore
as the logarithm of the ratio of the real image size to the requested
page size. This avoids wasting time decoding the largest scales of
large images.
We assume it is safe to ignore every scale that is larger than the
requested rendering size.
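
A minimal sketch of the idea, not the actual PDFium code: it assumes
each JPEG2000 resolution level halves the image dimensions, and the
helper name and its callers are made up for illustration. Only
cp_reduce and opj_set_default_decoder_parameters() are real OpenJPEG
API.

  // Hypothetical helper: how many of the largest resolution levels can
  // be skipped, assuming each skipped level halves both dimensions.
  #include <algorithm>
  #include <cmath>
  #include <cstdint>

  uint32_t GetReduceLevels(uint32_t image_width, uint32_t image_height,
                           uint32_t target_width, uint32_t target_height) {
    if (target_width == 0 || target_height == 0)
      return 0;  // No size hint: decode at full resolution.
    // Ratio between the stored image and the requested rendering size.
    double ratio =
        std::min(static_cast<double>(image_width) / target_width,
                 static_cast<double>(image_height) / target_height);
    if (ratio <= 1.0)
      return 0;  // Requested size is at least the stored size.
    // Each skipped level divides the size by 2, so the number of levels
    // to skip is the floor of log2 of the ratio.
    return static_cast<uint32_t>(std::floor(std::log2(ratio)));
  }

  // OpenJPEG is then told to drop that many of the largest levels:
  //   opj_dparameters_t params;
  //   opj_set_default_decoder_parameters(&params);
  //   params.cp_reduce = GetReduceLevels(w, h, target_w, target_h);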
Bug: pdfium:1924
Change-Id: I791aa63343f5c32657708003212c8007040e3bc8
Reviewed-on: https://pdfium-review.googlesource.com/c/pdfium/+/99970
Commit-Queue: Lei Zhang <thestig@chromium.org>
Reviewed-by: Tom Sepez <tsepez@chromium.org>
Reviewed-by: Lei Zhang <thestig@chromium.org>
diff --git a/fpdfsdk/fpdf_thumbnail.cpp b/fpdfsdk/fpdf_thumbnail.cpp
index 74c7831..250f732 100644
--- a/fpdfsdk/fpdf_thumbnail.cpp
+++ b/fpdfsdk/fpdf_thumbnail.cpp
@@ -67,7 +67,7 @@
std::move(thumb_stream));
const CPDF_DIB::LoadState start_status = dib_source->StartLoadDIBBase(
false, nullptr, pdf_page->GetPageResources().Get(), false,
- CPDF_ColorSpace::Family::kUnknown, false);
+ CPDF_ColorSpace::Family::kUnknown, false, {0, 0});
if (start_status == CPDF_DIB::LoadState::kFail)
return nullptr;