// Copyright 2018 The Hugo Authors. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Package pageparser provides a parser for Hugo content files (Markdown, HTML etc.).
// This implementation is highly inspired by the great talk given by Rob Pike called "Lexical Scanning in Go".
// It's on YouTube; Google it.
// See slides here: http://cuddle.googlecode.com/hg/talk/lex.html
package pageparser

import (
	"bytes"
	"fmt"
	"unicode"
	"unicode/utf8"
)

const eof = -1

// A stateFunc returns the next state in the scanner; a nil state ends the scan.
type stateFunc func(*pageLexer) stateFunc
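
// A pageLexer holds the scanning state: the input being scanned, the current
// position in it, and the items emitted so far.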
type pageLexer struct {
	input      []byte
	stateStart stateFunc
	state      stateFunc
	pos        int // input position
	start      int // item start position
	width      int // width of last element

	// Contains lexers for shortcodes and other main section
	// elements.
	sectionHandlers *sectionHandlers

	cfg Config

	// The summary divider to look for.
	summaryDivider []byte
	// Set when we have parsed any summary divider.
	summaryDividerChecked bool
	// Whether we're in an HTML comment.
	isInHTMLComment bool

	lexerShortcodeState

	// items delivered to client
	items Items
}

// Implement the Result interface
func (l *pageLexer) Iterator() *Iterator {
	return l.newIterator()
}

func (l *pageLexer) Input() []byte {
	return l.input
}
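
// Config holds configuration for the page lexer.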
type Config struct {
	EnableEmoji bool
}
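
// A rough sketch of how this lexer is driven internally; the identifiers
// below (content, startState) are illustrative placeholders, not names
// defined in this file:
//
//	l := newPageLexer(content, startState, Config{EnableEmoji: true})
//	l.run()
//	iter := l.Iterator()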
// note: the input position here is normally 0 (start), but
// can be set if the position of the first shortcode is known
func newPageLexer(input []byte, stateStart stateFunc, cfg Config) *pageLexer {
	lexer := &pageLexer{
		input:      input,
		stateStart: stateStart,
		cfg:        cfg,
		lexerShortcodeState: lexerShortcodeState{
			currLeftDelimItem:  tLeftDelimScNoMarkup,
			currRightDelimItem: tRightDelimScNoMarkup,
			openShortcodes:     make(map[string]bool),
		},
		items: make([]Item, 0, 5),
	}

	lexer.sectionHandlers = createSectionHandlers(lexer)

	return lexer
}
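
// newIterator returns an Iterator over the lexed items, positioned before the
// first item.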
func (l *pageLexer) newIterator() *Iterator {
	return &Iterator{l: l, lastPos: -1}
}

// run is the main loop: it drives the lexer from state to state until a
// state function returns nil.
func (l *pageLexer) run() *pageLexer {
	for l.state = l.stateStart; l.state != nil; {
		l.state = l.state(l)
	}
	return l
}

// Page syntax
var (
	byteOrderMark     = '\ufeff'
	summaryDivider    = []byte("<!--more-->")
	summaryDividerOrg = []byte("# more")
	delimTOML         = []byte("+++")
	delimYAML         = []byte("---")
	delimOrg          = []byte("#+")
	htmlCommentStart  = []byte("<!--")
	htmlCommentEnd    = []byte("-->")

	emojiDelim = byte(':')
)
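
// next returns and consumes the next rune in the input, or eof when the input
// is exhausted.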
func (l *pageLexer) next() rune {
	if l.pos >= len(l.input) {
		l.width = 0
		return eof
	}

	runeValue, runeWidth := utf8.DecodeRune(l.input[l.pos:])
	l.width = runeWidth
	l.pos += l.width
	return runeValue
}

// peek, but do not consume
func (l *pageLexer) peek() rune {
	r := l.next()
	l.backup()
	return r
}

// steps back one
func (l *pageLexer) backup() {
	l.pos -= l.width
}

// sends an item back to the client.
func (l *pageLexer) emit(t ItemType) {
	l.items = append(l.items, Item{t, l.start, l.input[l.start:l.pos], false})
	l.start = l.pos
}

// sends a string item back to the client.
func (l *pageLexer) emitString(t ItemType) {
	l.items = append(l.items, Item{t, l.start, l.input[l.start:l.pos], true})
	l.start = l.pos
}

func (l *pageLexer) isEOF() bool {
	return l.pos >= len(l.input)
}

// special case, do not send '\\' back to client
func (l *pageLexer) ignoreEscapesAndEmit(t ItemType, isString bool) {
	val := bytes.Map(func(r rune) rune {
		if r == '\\' {
			return -1
		}
		return r
	}, l.input[l.start:l.pos])
	l.items = append(l.items, Item{t, l.start, val, isString})
	l.start = l.pos
}

// gets the current value (for debugging and error handling)
func (l *pageLexer) current() []byte {
	return l.input[l.start:l.pos]
}

// ignore current element
func (l *pageLexer) ignore() {
	l.start = l.pos
}
var lf = []byte("\n")
// errorf emits an error item; the nil stateFunc it returns terminates the parser.
func (l *pageLexer) errorf(format string, args ...interface{}) stateFunc {
	l.items = append(l.items, Item{tError, l.start, []byte(fmt.Sprintf(format, args...)), true})
	return nil
}
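// consumeCRLF consumes an optional '\r' followed by an optional '\n',
// returning true if at least one of them was consumed.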
func (l *pageLexer) consumeCRLF() bool {
	var consumed bool
	for _, r := range crLf {
		if l.next() != r {
			l.backup()
		} else {
			consumed = true
		}
	}
	return consumed
}
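// consumeToNextLine advances the lexer until it has consumed an end-of-line
// rune or reached EOF.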
func (l *pageLexer) consumeToNextLine() {
	for {
		r := l.next()
		if r == eof || isEndOfLine(r) {
			return
		}
	}
}
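// consumeToSpace advances the lexer to the next whitespace rune (or EOF) and
// backs up so that rune is left unconsumed.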
func (l *pageLexer) consumeToSpace() {
	for {
		r := l.next()
		if r == eof || unicode.IsSpace(r) {
			l.backup()
			return
		}
	}
}
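// consumeSpace skips over any run of whitespace, backing up at the first
// non-space rune (or EOF).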
func (l *pageLexer) consumeSpace() {
	for {
		r := l.next()
		if r == eof || !unicode.IsSpace(r) {
			l.backup()
			return
		}
	}
}
// lexEmoji lexes an emoji candidate starting at ":"; if no closing ":" is
// found, the text is emitted as plain text instead.
func lexEmoji(l *pageLexer) stateFunc {
	pos := l.pos + 1
	valid := false

	for i := pos; i < len(l.input); i++ {
		if i > pos && l.input[i] == emojiDelim {
			pos = i + 1
			valid = true
			break
		}

		r, _ := utf8.DecodeRune(l.input[i:])
		if !(isAlphaNumericOrHyphen(r) || r == '+') {
			break
		}
	}

	if valid {
		l.pos = pos
		l.emit(TypeEmoji)
	} else {
		l.pos++
		l.emit(tText)
	}

	return lexMainSection
}
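// sectionHandlers bundles the sectionHandler instances used by lexMainSection
// to find and lex shortcodes, the summary divider and (optionally) emojis.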
type sectionHandlers struct {
	l *pageLexer

	// Set when none of the sections are found so we
	// can safely stop looking and skip to the end.
	skipAll bool

	handlers    []*sectionHandler
	skipIndexes []int
}
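// skip returns the lowest index at which any of the handlers has a next match,
// or -1 if none of them can match anymore.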
func (s *sectionHandlers) skip() int {
	if s.skipAll {
		return -1
	}

	s.skipIndexes = s.skipIndexes[:0]
	var shouldSkip bool
	for _, skipper := range s.handlers {
		idx := skipper.skip()
		if idx != -1 {
			shouldSkip = true
			s.skipIndexes = append(s.skipIndexes, idx)
		}
	}

	if !shouldSkip {
		s.skipAll = true
		return -1
	}

	return minIndex(s.skipIndexes...)
}
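// createSectionHandlers sets up the handlers for shortcodes, the summary
// divider and, if enabled in the site config, emojis.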
func createSectionHandlers(l *pageLexer) *sectionHandlers {

	shortCodeHandler := &sectionHandler{
		l: l,
		skipFunc: func(l *pageLexer) int {
			return l.index(leftDelimSc)
		},
		lexFunc: func(origin stateFunc, l *pageLexer) (stateFunc, bool) {
			if !l.isShortCodeStart() {
				return origin, false
			}

			if l.isInline {
				// If we're inside an inline shortcode, the only valid shortcode markup is
				// the markup which closes it.
				b := l.input[l.pos+3:]
				end := indexNonWhiteSpace(b, '/')
				if end != len(l.input)-1 {
					b = bytes.TrimSpace(b[end+1:])
					if end == -1 || !bytes.HasPrefix(b, []byte(l.currShortcodeName+" ")) {
						return l.errorf("inline shortcodes do not support nesting"), true
					}
				}
			}

			if l.hasPrefix(leftDelimScWithMarkup) {
				l.currLeftDelimItem = tLeftDelimScWithMarkup
				l.currRightDelimItem = tRightDelimScWithMarkup
			} else {
				l.currLeftDelimItem = tLeftDelimScNoMarkup
				l.currRightDelimItem = tRightDelimScNoMarkup
			}

			return lexShortcodeLeftDelim, true
		},
	}

	summaryDividerHandler := &sectionHandler{
		l: l,
		skipFunc: func(l *pageLexer) int {
			if l.summaryDividerChecked || l.summaryDivider == nil {
				return -1
			}
			return l.index(l.summaryDivider)
		},
		lexFunc: func(origin stateFunc, l *pageLexer) (stateFunc, bool) {
			if !l.hasPrefix(l.summaryDivider) {
				return origin, false
			}

			l.summaryDividerChecked = true
			l.pos += len(l.summaryDivider)
			// This makes it a little easier to reason about later.
			l.consumeSpace()
			l.emit(TypeLeadSummaryDivider)

			return origin, true
		},
	}

	handlers := []*sectionHandler{shortCodeHandler, summaryDividerHandler}

	if l.cfg.EnableEmoji {
		emojiHandler := &sectionHandler{
			l: l,
			skipFunc: func(l *pageLexer) int {
				return l.indexByte(emojiDelim)
			},
			lexFunc: func(origin stateFunc, l *pageLexer) (stateFunc, bool) {
				return lexEmoji, true
			},
		}

		handlers = append(handlers, emojiHandler)
	}

	return &sectionHandlers{
		l:           l,
		handlers:    handlers,
		skipIndexes: make([]int, len(handlers)),
	}
}
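// lex emits any plain text seen so far and then lets each handler try to lex
// the section at the current position; if none of them handles it, the lexer
// advances one byte and stays in the origin state.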
func (s *sectionHandlers) lex(origin stateFunc) stateFunc {
	if s.skipAll {
		return nil
	}

	if s.l.pos > s.l.start {
		s.l.emit(tText)
	}

	for _, handler := range s.handlers {
		if handler.skipAll {
			continue
		}

		next, handled := handler.lexFunc(origin, handler.l)
		if next == nil || handled {
			return next
		}
	}

	// Not handled by the above.
	s.l.pos++

	return origin
}
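// sectionHandler finds and lexes one kind of section (shortcode, summary
// divider or emoji) inside the main content.
//
// A handler is wired up roughly like this (a sketch only; see
// createSectionHandlers above for the real instances):
//
//	h := &sectionHandler{
//		l:        l,
//		skipFunc: func(l *pageLexer) int { return l.index(leftDelimSc) },
//		lexFunc: func(origin stateFunc, l *pageLexer) (stateFunc, bool) {
//			return lexShortcodeLeftDelim, true
//		},
//	}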
type sectionHandler struct {
	l *pageLexer

	// No more sections of this type.
	skipAll bool

	// Returns the index of the next match, -1 if none found.
	skipFunc func(l *pageLexer) int

	// Lex lexes the current section and returns the next state func and
	// a bool telling if this section was handled.
	// Note that returning nil as the next state will terminate the
	// lexer.
	lexFunc func(origin stateFunc, l *pageLexer) (stateFunc, bool)
}
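// skip returns the index of this handler's next match and marks the handler as
// exhausted (skipAll) when no match is found.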
func (s *sectionHandler) skip() int {
	if s.skipAll {
		return -1
	}

	idx := s.skipFunc(s.l)
	if idx == -1 {
		s.skipAll = true
	}
	return idx
}
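// lexMainSection is the main content state: it fast-forwards to the closest
// section candidate found by the handlers and lets them lex it.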
func lexMainSection(l *pageLexer) stateFunc {

	if l.isEOF() {
		return lexDone
	}

	if l.isInHTMLComment {
		return lexEndFromtMatterHTMLComment
	}

	// Fast forward as far as possible.
	skip := l.sectionHandlers.skip()

	if skip == -1 {
		l.pos = len(l.input)
		return lexDone
	} else if skip > 0 {
		l.pos += skip
	}

	next := l.sectionHandlers.lex(lexMainSection)
	if next != nil {
		return next
	}

	l.pos = len(l.input)
	return lexDone
}
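// lexDone emits any remaining text plus the final EOF item and terminates the
// lexer by returning nil.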
func lexDone(l *pageLexer) stateFunc {

	// Done!
	if l.pos > l.start {
		l.emit(tText)
	}
	l.emit(tEOF)
	return nil
}
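// printCurrentInput prints the remaining input; a helper when debugging the
// lexer.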
func (l *pageLexer) printCurrentInput() {
	fmt.Printf("input[%d:]: %q", l.pos, string(l.input[l.pos:]))
}
// state helpers
func (l *pageLexer) index(sep []byte) int {
	return bytes.Index(l.input[l.pos:], sep)
}
func (l *pageLexer) indexByte(sep byte) int {
	return bytes.IndexByte(l.input[l.pos:], sep)
}
func (l *pageLexer) hasPrefix(prefix []byte) bool {
	return bytes.HasPrefix(l.input[l.pos:], prefix)
}
// helper functions
// minIndex returns the smallest of the given indices that is >= 0, or -1 if
// there is none.
func minIndex(indices ...int) int {
	min := -1

	for _, j := range indices {
		if j < 0 {
			continue
		}
		if min == -1 {
			min = j
		} else if j < min {
			min = j
		}
	}
	return min
}
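// indexNonWhiteSpace returns the index of the first non-whitespace rune in s
// if that rune equals in, otherwise -1.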
func indexNonWhiteSpace(s []byte, in rune) int {
	idx := bytes.IndexFunc(s, func(r rune) bool {
		return !unicode.IsSpace(r)
	})

	if idx == -1 {
		return -1
	}

	r, _ := utf8.DecodeRune(s[idx:])
	if r == in {
		return idx
	}
	return -1
}
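// isSpace reports whether r is a space or a tab.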
func isSpace(r rune) bool {
	return r == ' ' || r == '\t'
}
func isAlphaNumericOrHyphen(r rune) bool {
	// let unquoted YouTube ids as positional params slip through (they contain hyphens)
	return isAlphaNumeric(r) || r == '-'
}
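// crLf holds the runes consumed by consumeCRLF.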
var crLf = []rune{'\r', '\n'}
func isEndOfLine(r rune) bool {
	return r == '\r' || r == '\n'
}
func isAlphaNumeric(r rune) bool {
	return r == '_' || unicode.IsLetter(r) || unicode.IsDigit(r)
}