hugo/resources/image.go
Bjørn Erik Pedersen 7285e74090
all: Rework page store, add a dynacache, improve partial rebuilds, and some general spring cleaning
There are some breaking changes in this commit, see #11455.

Closes #11455
Closes #11549

This fixes a set of bugs (see the issue list below) and also pays down some technical debt accumulated over the years. We now build with Staticcheck enabled in the CI build.

The performance should be about the same as before for regular-sized Hugo sites, but it should perform and scale much better for larger data sets, as objects that use lots of memory (e.g. rendered Markdown, big JSON files read into maps with transform.Unmarshal, etc.) will now get automatically garbage collected if needed. Performance on partial rebuilds when running the server in fast render mode should be the same, but the change detection should be much more accurate.

A list of the notable new features:

* A new dependency tracker that covers (almost) all of Hugo's API and is used to do fine-grained partial rebuilds when running the server.
* A new and simpler tree document store which allows fast lookups and prefix-walking in all dimensions (e.g. language) concurrently.
* You can now configure an upper memory limit, allowing for much larger data sets and/or running on lower-specced PCs.
* We have lifted the "no resources in sub folders" restriction for branch bundles (e.g. sections).

Memory Limit: Hugo will, by default, set aside a quarter of the total system memory, but you can set this via the OS environment variable HUGO_MEMORYLIMIT (in gigabytes). This is backed by a partitioned LRU cache used throughout Hugo, a cache that gets dynamically resized in low-memory situations, allowing Go's garbage collector to free the memory.
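
To make that concrete, here is a minimal sketch of how such a byte budget could be derived; the memoryLimit helper and the fixed total are illustrative assumptions, not Hugo's actual implementation:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

// memoryLimit returns a byte budget: HUGO_MEMORYLIMIT (in gigabytes) if set,
// otherwise a quarter of the given total system memory.
// Hypothetical helper, for illustration only.
func memoryLimit(totalBytes uint64) uint64 {
	if s := os.Getenv("HUGO_MEMORYLIMIT"); s != "" {
		if gb, err := strconv.ParseFloat(s, 64); err == nil && gb > 0 {
			return uint64(gb * 1024 * 1024 * 1024)
		}
	}
	return totalBytes / 4
}

func main() {
	const total = 16 << 30 // pretend the machine has 16 GiB
	fmt.Println(memoryLimit(total))
}
```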

New Dependency Tracker: Hugo has had a rule-based, coarse-grained approach to server rebuilds that has mostly worked well, but there have been some surprises (e.g. stale content). This is now revamped with a new dependency tracker that can quickly calculate the delta given a changed resource (e.g. a content file, template, JS file etc.). This handles transitive relations, e.g. $page -> js.Build -> JS import, or $page1.Content -> render hook -> site.GetPage -> $page2.Title, or $page1.Content -> shortcode -> partial -> site.RegularPages -> $page2.Content -> shortcode ..., and should also handle changes to aggregated values (e.g. site.Lastmod) effectively.
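
As a rough sketch of the idea (hypothetical types and names, not Hugo's identity package), a tracker can record "X depends on Y" edges while rendering and then walk them transitively to compute the rebuild delta for a changed resource:

```go
package main

import "fmt"

// tracker records which items depend on which others.
type tracker struct {
	dependents map[string][]string // dependency -> items that used it
}

func (t *tracker) addDependency(item, dependsOn string) {
	t.dependents[dependsOn] = append(t.dependents[dependsOn], item)
}

// delta returns everything that needs a rebuild when changed changes,
// following transitive relations (page -> partial -> other page, etc.).
func (t *tracker) delta(changed string) []string {
	seen := map[string]bool{changed: true}
	queue := []string{changed}
	var out []string
	for len(queue) > 0 {
		cur := queue[0]
		queue = queue[1:]
		for _, dep := range t.dependents[cur] {
			if !seen[dep] {
				seen[dep] = true
				out = append(out, dep)
				queue = append(queue, dep)
			}
		}
	}
	return out
}

func main() {
	t := &tracker{dependents: map[string][]string{}}
	t.addDependency("/blog/post1/", "partials/related.html") // post1 renders the partial
	t.addDependency("partials/related.html", "/blog/post2/") // the partial reads post2's title
	fmt.Println(t.delta("/blog/post2/"))                     // [partials/related.html /blog/post1/]
}
```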

The new dependency tracker covers all of Hugo's API, with two known exceptions (a list that may not be fully exhaustive):

* Changes to files loaded with the template func os.ReadFile may not be handled correctly. We recommend loading resources with resources.Get.
* Changes to Hugo objects (e.g. Page) passed in the template context to lang.Translate may not be detected correctly. We recommend keeping i18n templates simple and passing in little data context other than simple types such as strings and numbers.
Note that the cachebuster configuration (when A changes, then rebuild B) works well with the above, but we recommend that you revise that configuration, as in most situations it should not be needed. One example where it is still needed is with TailwindCSS, using changes to hugo_stats.json to trigger new CSS rebuilds.

Document Store: Previously, somewhat simplified, we split the document store (where we store pages and resources) into one tree per language. This worked pretty well, but the structure made some operations harder than they needed to be. We have now restructured it into one Radix tree for all languages. Internally, the language is considered a dimension of that tree, and the tree can be viewed in all dimensions concurrently. This makes some language-related operations simpler (e.g. finding translations is just a slice range), but the idea is that it should also be relatively inexpensive to add more dimensions if needed (e.g. role).
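
A toy sketch of the shape of such a store, with a plain map standing in for the Radix tree and the language index as the extra dimension (hypothetical names, not Hugo's internal API):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// node holds one document per language dimension under a single path key.
type node struct {
	docs []string // index = language; "" = no document in that language
}

type store struct {
	byPath map[string]*node
}

// insert stores a document for the given path and language index.
func (s *store) insert(path string, lang int, doc string) {
	n, ok := s.byPath[path]
	if !ok {
		n = &node{docs: make([]string, 3)} // 3 configured languages in this example
		s.byPath[path] = n
	}
	n.docs[lang] = doc
}

// translations is just a range over the language dimension of one node.
func (s *store) translations(path string) []string {
	var out []string
	if n, ok := s.byPath[path]; ok {
		for _, d := range n.docs {
			if d != "" {
				out = append(out, d)
			}
		}
	}
	return out
}

// walkPrefix visits all paths below prefix, independent of language.
func (s *store) walkPrefix(prefix string, fn func(path string)) {
	var keys []string
	for k := range s.byPath {
		if strings.HasPrefix(k, prefix) {
			keys = append(keys, k)
		}
	}
	sort.Strings(keys)
	for _, k := range keys {
		fn(k)
	}
}

func main() {
	s := &store{byPath: map[string]*node{}}
	s.insert("/blog/post1", 0, "post1 (en)")
	s.insert("/blog/post1", 1, "post1 (nn)")
	s.insert("/blog/post2", 0, "post2 (en)")
	fmt.Println(s.translations("/blog/post1")) // [post1 (en) post1 (nn)]
	s.walkPrefix("/blog/", func(p string) { fmt.Println(p) })
}
```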

Fixes #10169
Fixes #10364
Fixes #10482
Fixes #10630
Fixes #10656
Fixes #10694
Fixes #10918
Fixes #11262
Fixes #11439
Fixes #11453
Fixes #11457
Fixes #11466
Fixes #11540
Fixes #11551
Fixes #11556
Fixes #11654
Fixes #11661
Fixes #11663
Fixes #11664
Fixes #11669
Fixes #11671
Fixes #11807
Fixes #11808
Fixes #11809
Fixes #11815
Fixes #11840
Fixes #11853
Fixes #11860
Fixes #11883
Fixes #11904
Fixes #7388
Fixes #7425
Fixes #7436
Fixes #7544
Fixes #7882
Fixes #7960
Fixes #8255
Fixes #8307
Fixes #8863
Fixes #8927
Fixes #9192
Fixes #9324
2024-01-27 16:28:14 +01:00


// Copyright 2019 The Hugo Authors. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package resources
import (
"encoding/json"
"fmt"
"image"
"image/color"
"image/draw"
"image/gif"
_ "image/png"
"io"
"os"
"strings"
"sync"
color_extractor "github.com/marekm4/color-extractor"
"github.com/gohugoio/hugo/cache/filecache"
"github.com/gohugoio/hugo/common/hstrings"
"github.com/gohugoio/hugo/common/paths"
"github.com/gohugoio/hugo/identity"
"github.com/disintegration/gift"
"github.com/gohugoio/hugo/resources/images/exif"
"github.com/gohugoio/hugo/resources/internal"
"github.com/gohugoio/hugo/resources/resource"
"github.com/gohugoio/hugo/helpers"
"github.com/gohugoio/hugo/resources/images"
// Blind import for image.Decode
_ "golang.org/x/image/webp"
)
var (
_ images.ImageResource = (*imageResource)(nil)
_ resource.Source = (*imageResource)(nil)
_ resource.Cloner = (*imageResource)(nil)
_ resource.NameOriginalProvider = (*imageResource)(nil)
)
// imageResource represents an image resource.
type imageResource struct {
*images.Image
// When a image is processed in a chain, this holds the reference to the
// original (first).
root *imageResource
metaInit sync.Once
metaInitErr error
meta *imageMeta
dominantColorInit sync.Once
dominantColors []string
baseResource
}
type imageMeta struct {
Exif *exif.ExifInfo
}
func (i *imageResource) Exif() *exif.ExifInfo {
return i.root.getExif()
}
func (i *imageResource) getExif() *exif.ExifInfo {
i.metaInit.Do(func() {
supportsExif := i.Format == images.JPEG || i.Format == images.TIFF
if !supportsExif {
return
}
key := i.getImageMetaCacheTargetPath()
read := func(info filecache.ItemInfo, r io.ReadSeeker) error {
meta := &imageMeta{}
data, err := io.ReadAll(r)
if err != nil {
return err
}
if err = json.Unmarshal(data, &meta); err != nil {
return err
}
i.meta = meta
return nil
}
create := func(info filecache.ItemInfo, w io.WriteCloser) (err error) {
defer w.Close()
f, err := i.root.ReadSeekCloser()
if err != nil {
i.metaInitErr = err
return
}
defer f.Close()
x, err := i.getSpec().imaging.DecodeExif(f)
if err != nil {
i.getSpec().Logger.Warnf("Unable to decode Exif metadata from image: %s", i.Key())
return nil
}
i.meta = &imageMeta{Exif: x}
// Also write it to cache
enc := json.NewEncoder(w)
return enc.Encode(i.meta)
}
_, i.metaInitErr = i.getSpec().ImageCache.fcache.ReadOrCreate(key, read, create)
})
if i.metaInitErr != nil {
panic(fmt.Sprintf("metadata init failed: %s", i.metaInitErr))
}
if i.meta == nil {
return nil
}
return i.meta.Exif
}
// Colors returns a slice of the most dominant colors in an image
// using a simple histogram method.
func (i *imageResource) Colors() ([]string, error) {
var err error
i.dominantColorInit.Do(func() {
var img image.Image
img, err = i.DecodeImage()
if err != nil {
return
}
colors := color_extractor.ExtractColors(img)
for _, c := range colors {
i.dominantColors = append(i.dominantColors, images.ColorToHexString(c))
}
})
return i.dominantColors, err
}
// Clone is for internal use.
func (i *imageResource) Clone() resource.Resource {
gr := i.baseResource.Clone().(baseResource)
return &imageResource{
root: i.root,
Image: i.WithSpec(gr),
baseResource: gr,
}
}
func (i *imageResource) cloneTo(targetPath string) resource.Resource {
gr := i.baseResource.cloneTo(targetPath).(baseResource)
return &imageResource{
root: i.root,
Image: i.WithSpec(gr),
baseResource: gr,
}
}
func (i *imageResource) cloneWithUpdates(u *transformationUpdate) (baseResource, error) {
base, err := i.baseResource.cloneWithUpdates(u)
if err != nil {
return nil, err
}
var img *images.Image
if u.isContentChanged() {
img = i.WithSpec(base)
} else {
img = i.Image
}
return &imageResource{
root: i.root,
Image: img,
baseResource: base,
}, nil
}
var imageActions = []string{images.ActionResize, images.ActionCrop, images.ActionFit, images.ActionFill}
// Process processes the image with the given spec.
// The spec can contain an optional action, one of "resize", "crop", "fit" or "fill".
// This makes this method a more flexible version that covers all of Resize, Crop, Fit and Fill,
// but it also supports e.g. format conversions without any resize action.
func (i *imageResource) Process(spec string) (images.ImageResource, error) {
action, options := i.resolveActionOptions(spec)
return i.processActionOptions(action, options)
}
// Resize resizes the image to the specified width and height using the specified resampling
// filter and returns the transformed image. If one of width or height is 0, the image aspect
// ratio is preserved.
func (i *imageResource) Resize(spec string) (images.ImageResource, error) {
return i.processActionSpec(images.ActionResize, spec)
}
// Crop the image to the specified dimensions without resizing using the given anchor point.
// Space delimited config, e.g. `200x300 TopLeft`.
func (i *imageResource) Crop(spec string) (images.ImageResource, error) {
return i.processActionSpec(images.ActionCrop, spec)
}
// Fit scales down the image using the specified resample filter to fit the specified
// maximum width and height.
func (i *imageResource) Fit(spec string) (images.ImageResource, error) {
return i.processActionSpec(images.ActionFit, spec)
}
// Fill scales the image to the smallest possible size that will cover the specified dimensions,
// crops the resized image to the specified dimensions using the given anchor point.
// Space delimited config, e.g. `200x300 TopLeft`.
func (i *imageResource) Fill(spec string) (images.ImageResource, error) {
return i.processActionSpec(images.ActionFill, spec)
}
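// Filter applies one or more image filters to the image and returns the transformed image.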
func (i *imageResource) Filter(filters ...any) (images.ImageResource, error) {
var conf images.ImageConfig
var gfilters []gift.Filter
for _, f := range filters {
gfilters = append(gfilters, images.ToFilters(f)...)
}
var (
targetFormat images.Format
configSet bool
)
for _, f := range gfilters {
f = images.UnwrapFilter(f)
if specProvider, ok := f.(images.ImageProcessSpecProvider); ok {
action, options := i.resolveActionOptions(specProvider.ImageProcessSpec())
var err error
conf, err = images.DecodeImageConfig(action, options, i.Proc.Cfg, i.Format)
if err != nil {
return nil, err
}
configSet = true
if conf.TargetFormat != 0 {
targetFormat = conf.TargetFormat
// We only support one target format, but prefer the last one,
// so we keep going.
}
}
}
if !configSet {
conf = images.GetDefaultImageConfig("filter", i.Proc.Cfg)
}
conf.Action = "filter"
conf.Key = identity.HashString(gfilters)
conf.TargetFormat = targetFormat
if conf.TargetFormat == 0 {
conf.TargetFormat = i.Format
}
return i.doWithImageConfig(conf, func(src image.Image) (image.Image, error) {
var filters []gift.Filter
for _, f := range gfilters {
f = images.UnwrapFilter(f)
if specProvider, ok := f.(images.ImageProcessSpecProvider); ok {
processSpec := specProvider.ImageProcessSpec()
action, options := i.resolveActionOptions(processSpec)
conf, err := images.DecodeImageConfig(action, options, i.Proc.Cfg, i.Format)
if err != nil {
return nil, err
}
pFilters, err := i.Proc.FiltersFromConfig(src, conf)
if err != nil {
return nil, err
}
filters = append(filters, pFilters...)
} else if orientationProvider, ok := f.(images.ImageFilterFromOrientationProvider); ok {
tf := orientationProvider.AutoOrient(i.Exif())
if tf != nil {
filters = append(filters, tf)
}
} else {
filters = append(filters, f)
}
}
return i.Proc.Filter(src, filters...)
})
}
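// resolveActionOptions splits the given spec into an optional action (resize, crop, fit or fill) and the remaining options.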
func (i *imageResource) resolveActionOptions(spec string) (string, []string) {
var action string
options := strings.Fields(spec)
for i, p := range options {
if hstrings.InSlicEqualFold(imageActions, p) {
action = p
options = append(options[:i], options[i+1:]...)
break
}
}
return action, options
}
func (i *imageResource) processActionSpec(action, spec string) (images.ImageResource, error) {
return i.processActionOptions(action, strings.Fields(spec))
}
func (i *imageResource) processActionOptions(action string, options []string) (images.ImageResource, error) {
conf, err := images.DecodeImageConfig(action, options, i.Proc.Cfg, i.Format)
if err != nil {
return nil, err
}
img, err := i.doWithImageConfig(conf, func(src image.Image) (image.Image, error) {
return i.Proc.ApplyFiltersFromConfig(src, conf)
})
if err != nil {
return nil, err
}
if action == images.ActionFill {
if conf.Anchor == 0 && img.Width() == 0 || img.Height() == 0 {
// See https://github.com/gohugoio/hugo/issues/7955
// Smartcrop fails silently in some rare cases.
// Fall back to a center fill.
conf.Anchor = gift.CenterAnchor
conf.AnchorStr = "center"
return i.doWithImageConfig(conf, func(src image.Image) (image.Image, error) {
return i.Proc.ApplyFiltersFromConfig(src, conf)
})
}
}
return img, nil
}
// Serialize image processing. The imaging library spins up its own set of goroutines,
// so there is not much to gain from adding more load to the mix. That
// can even have a negative effect in low-resource scenarios.
// Note that this only affects the non-cached scenario. Once the processed
// image is written to disk, everything is fast, fast, fast.
const imageProcWorkers = 1
var imageProcSem = make(chan bool, imageProcWorkers)
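// doWithImageConfig runs the given transformation through the image cache, so each distinct configuration is only processed once.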
func (i *imageResource) doWithImageConfig(conf images.ImageConfig, f func(src image.Image) (image.Image, error)) (images.ImageResource, error) {
img, err := i.getSpec().ImageCache.getOrCreate(i, conf, func() (*imageResource, image.Image, error) {
imageProcSem <- true
defer func() {
<-imageProcSem
}()
src, err := i.DecodeImage()
if err != nil {
return nil, nil, &os.PathError{Op: conf.Action, Path: i.TargetPath(), Err: err}
}
converted, err := f(src)
if err != nil {
return nil, nil, &os.PathError{Op: conf.Action, Path: i.TargetPath(), Err: err}
}
hasAlpha := !images.IsOpaque(converted)
shouldFill := conf.BgColor != nil && hasAlpha
shouldFill = shouldFill || (!conf.TargetFormat.SupportsTransparency() && hasAlpha)
var bgColor color.Color
if shouldFill {
bgColor = conf.BgColor
if bgColor == nil {
bgColor = i.Proc.Cfg.Config.BgColor
}
tmp := image.NewRGBA(converted.Bounds())
draw.Draw(tmp, tmp.Bounds(), image.NewUniform(bgColor), image.Point{}, draw.Src)
draw.Draw(tmp, tmp.Bounds(), converted, converted.Bounds().Min, draw.Over)
converted = tmp
}
if conf.TargetFormat == images.PNG {
// Apply the colour palette from the source
if paletted, ok := src.(*image.Paletted); ok {
palette := paletted.Palette
if bgColor != nil && len(palette) < 256 {
palette = images.AddColorToPalette(bgColor, palette)
} else if bgColor != nil {
images.ReplaceColorInPalette(bgColor, palette)
}
tmp := image.NewPaletted(converted.Bounds(), palette)
draw.FloydSteinberg.Draw(tmp, tmp.Bounds(), converted, converted.Bounds().Min)
converted = tmp
}
}
ci := i.clone(converted)
targetPath := i.relTargetPathFromConfig(conf)
ci.setTargetPath(targetPath)
ci.Format = conf.TargetFormat
ci.setMediaType(conf.TargetFormat.MediaType())
return ci, converted, nil
})
if err != nil {
return nil, err
}
return img, nil
}
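// giphy wraps the first frame of a decoded GIF as an image.Image while keeping the full gif.GIF available via GIF().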
type giphy struct {
image.Image
gif *gif.GIF
}
func (g *giphy) GIF() *gif.GIF {
return g.gif
}
// DecodeImage decodes the image source into an Image.
// This is for internal use only.
func (i *imageResource) DecodeImage() (image.Image, error) {
f, err := i.ReadSeekCloser()
if err != nil {
return nil, fmt.Errorf("failed to open image for decode: %w", err)
}
defer f.Close()
if i.Format == images.GIF {
g, err := gif.DecodeAll(f)
if err != nil {
return nil, fmt.Errorf("failed to decode gif: %w", err)
}
return &giphy{gif: g, Image: g.Image[0]}, nil
}
img, _, err := image.Decode(f)
return img, err
}
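// clone returns a copy of the resource, wrapping the given image if it is non-nil.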
func (i *imageResource) clone(img image.Image) *imageResource {
spec := i.baseResource.Clone().(baseResource)
var image *images.Image
if img != nil {
image = i.WithImage(img)
} else {
image = i.WithSpec(spec)
}
return &imageResource{
Image: image,
root: i.root,
baseResource: spec,
}
}
func (i *imageResource) getImageMetaCacheTargetPath() string {
const imageMetaVersionNumber = 1 // Increment to invalidate the meta cache
cfgHash := i.getSpec().imaging.Cfg.SourceHash
df := i.getResourcePaths()
p1, _ := paths.FileAndExt(df.File)
h := i.hash()
idStr := identity.HashString(h, i.size(), imageMetaVersionNumber, cfgHash)
df.File = fmt.Sprintf("%s_%s.json", p1, idStr)
return df.TargetPath()
}
func (i *imageResource) relTargetPathFromConfig(conf images.ImageConfig) internal.ResourcePaths {
p1, p2 := paths.FileAndExt(i.getResourcePaths().File)
if conf.TargetFormat != i.Format {
p2 = conf.TargetFormat.DefaultExtension()
}
h := i.hash()
idStr := fmt.Sprintf("_hu%s_%d", h, i.size())
// Do not change this without a good reason.
const md5Threshold = 100
key := conf.GetKey(i.Format)
// It is useful to have the key in clear text, but when nesting transforms, it
// can easily be too long to read, and maybe even too long
// for the different OSes to handle.
if len(p1)+len(idStr)+len(p2) > md5Threshold {
key = helpers.MD5String(p1 + key + p2)
huIdx := strings.Index(p1, "_hu")
if huIdx != -1 {
p1 = p1[:huIdx]
} else {
// This started out as a very long file name. Making it even longer
// could melt ice in the Arctic.
p1 = ""
}
} else if strings.Contains(p1, idStr) {
// On scaling an already scaled image, we get the file info from the original.
// Repeating the same info in the filename makes it stuttery for no good reason.
idStr = ""
}
rp := i.getResourcePaths()
rp.File = fmt.Sprintf("%s%s_%s%s", p1, idStr, key, p2)
return rp
}