// Copyright 2018 The Hugo Authors. All rights reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package pageparser

import (
	"bytes"
	"fmt"
	"unicode"
	"unicode/utf8"
)

const eof = -1

// returns the next state in the scanner.
type stateFunc func(*pageLexer) stateFunc
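
// The stateFunc pattern drives the whole lexer: run (further down) keeps
// calling the current state function until one returns nil. A minimal sketch
// of such a state, reusing helpers from this file; lexPlainText and
// lexShortcodeLeftDelim are hypothetical names used only for illustration:
//
//	func lexPlainText(l *pageLexer) stateFunc {
//		for !l.isEOF() {
//			if l.next() == '{' && l.peek() == '{' {
//				l.backup() // hand the delimiter back to the shortcode state
//				l.emit(tText)
//				return lexShortcodeLeftDelim
//			}
//		}
//		l.emit(tText)
//		return nil // nil terminates the run loop
//	}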

type pageLexer struct {
	input      []byte
	stateStart stateFunc
	state      stateFunc
	pos        int // input position
	start      int // item start position
	width      int // width of last element

	// Contains lexers for shortcodes and other main section
	// elements.
	sectionHandlers *sectionHandlers

	cfg Config

	// The summary divider to look for.
	summaryDivider []byte
	// Set when we have parsed any summary divider.
	summaryDividerChecked bool
	// Whether we're in an HTML comment.
	isInHTMLComment bool

	lexerShortcodeState

	// items delivered to client
	items Items
}

// Implement the Result interface
func (l *pageLexer) Iterator() *Iterator {
	return NewIterator(l.items)
}

func (l *pageLexer) Input() []byte {
	return l.input
}
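
// A hypothetical consumption sketch for the iterator above; Next, IsEOF and
// ValStr are assumed method names shown only for illustration, not guaranteed
// by this file:
//
//	it := l.Iterator()
//	for item := it.Next(); !item.IsEOF(); item = it.Next() {
//		fmt.Printf("%v: %q\n", item.Type, item.ValStr(l.Input()))
//	}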

type Config struct {
	EnableEmoji bool
}

// note: the input position here is normally 0 (start), but
// can be set if position of first shortcode is known
func newPageLexer(input []byte, stateStart stateFunc, cfg Config) *pageLexer {
	lexer := &pageLexer{
		input:      input,
		stateStart: stateStart,
		cfg:        cfg,
		lexerShortcodeState: lexerShortcodeState{
			currLeftDelimItem:  tLeftDelimScNoMarkup,
			currRightDelimItem: tRightDelimScNoMarkup,
			openShortcodes:     make(map[string]bool),
		},
		items: make([]Item, 0, 5),
	}

	lexer.sectionHandlers = createSectionHandlers(lexer)

	return lexer
}

// main loop
func (l *pageLexer) run() *pageLexer {
	for l.state = l.stateStart; l.state != nil; {
		l.state = l.state(l)
	}
	return l
}
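
// Putting the pieces together, a typical driver looks roughly like the sketch
// below; lexIntroSection stands in for whichever start state the caller picks
// (an assumed name, shown only for illustration):
//
//	l := newPageLexer(content, lexIntroSection, Config{EnableEmoji: true})
//	l.run()
//	iter := l.Iterator()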

// Page syntax
var (
	byteOrderMark     = '\ufeff'
	summaryDivider    = []byte("<!--more-->")
	summaryDividerOrg = []byte("# more")
	delimTOML         = []byte("+++")
	delimYAML         = []byte("---")
	delimOrg          = []byte("#+")
	htmlCommentStart  = []byte("<!--")
	htmlCommentEnd    = []byte("-->")

	emojiDelim = byte(':')
)
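
// For orientation, the summary divider above is what authors place in their
// content to mark the end of the summary, for example:
//
//	First paragraph, used as the summary.
//	<!--more-->
//	The rest of the page content.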

func (l *pageLexer) next() rune {
	if l.pos >= len(l.input) {
		l.width = 0
		return eof
	}

	runeValue, runeWidth := utf8.DecodeRune(l.input[l.pos:])
	l.width = runeWidth
	l.pos += l.width

	return runeValue
}

// peek, but does not consume
func (l *pageLexer) peek() rune {
	r := l.next()
	l.backup()
	return r
}

// steps back one
func (l *pageLexer) backup() {
	l.pos -= l.width
}
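
// Note that backup undoes only the most recent call to next, since it relies
// on the single stored l.width; peek above is built on exactly that
// next/backup pairing.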

func (l *pageLexer) append(item Item) {
	if item.Pos() < len(l.input) {
		item.firstByte = l.input[item.Pos()]
	}
	l.items = append(l.items, item)
}

// sends an item back to the client.
func (l *pageLexer) emit(t ItemType) {
	defer func() {
		l.start = l.pos
	}()

	if t == tText {
		// Identify any trailing whitespace/indentation.
		// We currently only care about the last one.
		for i := l.pos - 1; i >= l.start; i-- {
			b := l.input[i]
			if b != ' ' && b != '\t' && b != '\r' && b != '\n' {
				break
			}
			if i == l.start && b != '\n' {
				l.append(Item{Type: tIndentation, low: l.start, high: l.pos})
				return
			} else if b == '\n' && i < l.pos-1 {
				l.append(Item{Type: t, low: l.start, high: i + 1})
				l.append(Item{Type: tIndentation, low: i + 1, high: l.pos})
				return
			} else if b == '\n' && i == l.pos-1 {
				break
			}
		}
	}

	l.append(Item{Type: t, low: l.start, high: l.pos})
}
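
// As a concrete example of the splitting above: emitting tText over the
// pending input "abc\n  " yields a tText item covering "abc\n" followed by a
// tIndentation item covering the two trailing spaces.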

// sends a string item back to the client.
func (l *pageLexer) emitString(t ItemType) {
	l.append(Item{Type: t, low: l.start, high: l.pos, isString: true})
	l.start = l.pos
}

func (l *pageLexer) isEOF() bool {
	return l.pos >= len(l.input)
}

// special case, do not send '\\' back to client
func (l *pageLexer) ignoreEscapesAndEmit(t ItemType, isString bool) {
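	// The loop below walks the pending span rune by rune, collecting the byte
	// ranges between backslashes as segments and emitting each '\\' rune as its
	// own TypeIgnore item; the collected segments are then emitted as a single
	// item of the requested type.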
	i := l.start
	k := i

	var segments []lowHigh

	for i < l.pos {
		r, w := utf8.DecodeRune(l.input[i:l.pos])
		if r == '\\' {
			if i > k {
				segments = append(segments, lowHigh{k, i})
			}
			l.append(Item{Type: TypeIgnore, low: i, high: i + w})
			k = i + w
		}
		i += w
	}

	if k < l.pos {
		segments = append(segments, lowHigh{k, l.pos})
	}

	if len(segments) > 0 {
		l.append(Item{Type: t, segments: segments})
	}

	l.start = l.pos
|
2022-07-07 10:11:47 -04:00
|
|
|
|
Shortcode rewrite, take 2
This commit contains a restructuring and partial rewrite of the shortcode handling.
Prior to this commit rendering of the page content was mingled with handling of the shortcodes. This led to several oddities.
The new flow is:
1. Shortcodes are extracted from page and replaced with placeholders.
2. Shortcodes are processed and rendered
3. Page is processed
4. The placeholders are replaced with the rendered shortcodes
The handling of summaries is also made simpler by this.
This commit also introduces some other chenges:
1. distinction between shortcodes that need further processing and those who do not:
* `{{< >}}`: Typically raw HTML. Will not be processed.
* `{{% %}}`: Will be processed by the page's markup engine (Markdown or (infuture) Asciidoctor)
The above also involves a new shortcode-parser, with lexical scanning inspired by Rob Pike's talk called "Lexical Scanning in Go",
which should be easier to understand, give better error messages and perform better.
2. If you want to exclude a shortcode from being processed (for documentation etc.), the inner part of the shorcode must be commented out, i.e. `{{%/* movie 47238zzb */%}}`. See the updated shortcode section in the documentation for further examples.
The new parser supports nested shortcodes. This isn't new, but has two related design choices worth mentioning:
* The shortcodes will be rendered individually, so If both `{{< >}}` and `{{% %}}` are used in the nested hierarchy, one will be passed through the page's markdown processor, the other not.
* To avoid potential costly overhead of always looking far ahead for a possible closing tag, this implementation looks at the template itself, and is branded as a container with inner content if it contains a reference to `.Inner`
Fixes #565
Fixes #480
Fixes #461
And probably some others.
2014-10-27 16:48:30 -04:00
|
|
|
}
|
|
|
|
|
|
|
|
// gets the current value (for debugging and error handling)
|
2018-10-18 03:04:48 -04:00
|
|
|
func (l *pageLexer) current() []byte {
|
Shortcode rewrite, take 2
This commit contains a restructuring and partial rewrite of the shortcode handling.
Prior to this commit rendering of the page content was mingled with handling of the shortcodes. This led to several oddities.
The new flow is:
1. Shortcodes are extracted from page and replaced with placeholders.
2. Shortcodes are processed and rendered
3. Page is processed
4. The placeholders are replaced with the rendered shortcodes
The handling of summaries is also made simpler by this.
This commit also introduces some other chenges:
1. distinction between shortcodes that need further processing and those who do not:
* `{{< >}}`: Typically raw HTML. Will not be processed.
* `{{% %}}`: Will be processed by the page's markup engine (Markdown or (infuture) Asciidoctor)
The above also involves a new shortcode-parser, with lexical scanning inspired by Rob Pike's talk called "Lexical Scanning in Go",
which should be easier to understand, give better error messages and perform better.
2. If you want to exclude a shortcode from being processed (for documentation etc.), the inner part of the shorcode must be commented out, i.e. `{{%/* movie 47238zzb */%}}`. See the updated shortcode section in the documentation for further examples.
The new parser supports nested shortcodes. This isn't new, but has two related design choices worth mentioning:
* The shortcodes will be rendered individually, so If both `{{< >}}` and `{{% %}}` are used in the nested hierarchy, one will be passed through the page's markdown processor, the other not.
* To avoid potential costly overhead of always looking far ahead for a possible closing tag, this implementation looks at the template itself, and is branded as a container with inner content if it contains a reference to `.Inner`
Fixes #565
Fixes #480
Fixes #461
And probably some others.
2014-10-27 16:48:30 -04:00
|
|
|
return l.input[l.start:l.pos]
|
|
|
|
}
|
|
|
|
|
|
|
|
// ignore current element
|
2018-10-17 07:16:45 -04:00
|
|
|
func (l *pageLexer) ignore() {
|
Shortcode rewrite, take 2
This commit contains a restructuring and partial rewrite of the shortcode handling.
Prior to this commit rendering of the page content was mingled with handling of the shortcodes. This led to several oddities.
The new flow is:
1. Shortcodes are extracted from page and replaced with placeholders.
2. Shortcodes are processed and rendered
3. Page is processed
4. The placeholders are replaced with the rendered shortcodes
The handling of summaries is also made simpler by this.
This commit also introduces some other chenges:
1. distinction between shortcodes that need further processing and those who do not:
* `{{< >}}`: Typically raw HTML. Will not be processed.
* `{{% %}}`: Will be processed by the page's markup engine (Markdown or (infuture) Asciidoctor)
The above also involves a new shortcode-parser, with lexical scanning inspired by Rob Pike's talk called "Lexical Scanning in Go",
which should be easier to understand, give better error messages and perform better.
2. If you want to exclude a shortcode from being processed (for documentation etc.), the inner part of the shorcode must be commented out, i.e. `{{%/* movie 47238zzb */%}}`. See the updated shortcode section in the documentation for further examples.
The new parser supports nested shortcodes. This isn't new, but has two related design choices worth mentioning:
* The shortcodes will be rendered individually, so If both `{{< >}}` and `{{% %}}` are used in the nested hierarchy, one will be passed through the page's markdown processor, the other not.
* To avoid potential costly overhead of always looking far ahead for a possible closing tag, this implementation looks at the template itself, and is branded as a container with inner content if it contains a reference to `.Inner`
Fixes #565
Fixes #480
Fixes #461
And probably some others.
2014-10-27 16:48:30 -04:00
|
|
|
l.start = l.pos
|
|
|
|
}
|
|
|
|
|
2018-10-18 03:04:48 -04:00
|
|
|
var lf = []byte("\n")
|
|
|
|
|
Shortcode rewrite, take 2
This commit contains a restructuring and partial rewrite of the shortcode handling.
Prior to this commit rendering of the page content was mingled with handling of the shortcodes. This led to several oddities.
The new flow is:
1. Shortcodes are extracted from page and replaced with placeholders.
2. Shortcodes are processed and rendered
3. Page is processed
4. The placeholders are replaced with the rendered shortcodes
The handling of summaries is also made simpler by this.
This commit also introduces some other chenges:
1. distinction between shortcodes that need further processing and those who do not:
* `{{< >}}`: Typically raw HTML. Will not be processed.
* `{{% %}}`: Will be processed by the page's markup engine (Markdown or (infuture) Asciidoctor)
The above also involves a new shortcode-parser, with lexical scanning inspired by Rob Pike's talk called "Lexical Scanning in Go",
which should be easier to understand, give better error messages and perform better.
2. If you want to exclude a shortcode from being processed (for documentation etc.), the inner part of the shorcode must be commented out, i.e. `{{%/* movie 47238zzb */%}}`. See the updated shortcode section in the documentation for further examples.
The new parser supports nested shortcodes. This isn't new, but has two related design choices worth mentioning:
* The shortcodes will be rendered individually, so If both `{{< >}}` and `{{% %}}` are used in the nested hierarchy, one will be passed through the page's markdown processor, the other not.
* To avoid potential costly overhead of always looking far ahead for a possible closing tag, this implementation looks at the template itself, and is branded as a container with inner content if it contains a reference to `.Inner`
Fixes #565
Fixes #480
Fixes #461
And probably some others.
2014-10-27 16:48:30 -04:00
|
|
|
// nil terminates the parser
|
2022-03-17 17:03:27 -04:00
|
|
|
func (l *pageLexer) errorf(format string, args ...any) stateFunc {
|
2022-07-07 10:11:47 -04:00
|
|
|
l.append(Item{Type: tError, Err: fmt.Errorf(format, args...)})
|
Shortcode rewrite, take 2
This commit contains a restructuring and partial rewrite of the shortcode handling.
Prior to this commit rendering of the page content was mingled with handling of the shortcodes. This led to several oddities.
The new flow is:
1. Shortcodes are extracted from page and replaced with placeholders.
2. Shortcodes are processed and rendered
3. Page is processed
4. The placeholders are replaced with the rendered shortcodes
The handling of summaries is also made simpler by this.
This commit also introduces some other chenges:
1. distinction between shortcodes that need further processing and those who do not:
* `{{< >}}`: Typically raw HTML. Will not be processed.
* `{{% %}}`: Will be processed by the page's markup engine (Markdown or (infuture) Asciidoctor)
The above also involves a new shortcode-parser, with lexical scanning inspired by Rob Pike's talk called "Lexical Scanning in Go",
which should be easier to understand, give better error messages and perform better.
2. If you want to exclude a shortcode from being processed (for documentation etc.), the inner part of the shorcode must be commented out, i.e. `{{%/* movie 47238zzb */%}}`. See the updated shortcode section in the documentation for further examples.
The new parser supports nested shortcodes. This isn't new, but has two related design choices worth mentioning:
* The shortcodes will be rendered individually, so If both `{{< >}}` and `{{% %}}` are used in the nested hierarchy, one will be passed through the page's markdown processor, the other not.
* To avoid potential costly overhead of always looking far ahead for a possible closing tag, this implementation looks at the template itself, and is branded as a container with inner content if it contains a reference to `.Inner`
Fixes #565
Fixes #480
Fixes #461
And probably some others.
2014-10-27 16:48:30 -04:00
|
|
|
return nil
|
|
|
|
}
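
// A minimal sketch of how a state function might report a failure, assuming
// the stateFunc shape used in this file: errorf records a tError item and
// returns nil, which terminates the state machine. The function name is
// illustrative, not part of the lexer.
func lexFailExample(l *pageLexer) stateFunc {
	return l.errorf("unexpected input %q", l.current())
}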

func (l *pageLexer) consumeCRLF() bool {
	var consumed bool
	for _, r := range crLf {
		if l.next() != r {
			l.backup()
		} else {
			consumed = true
		}
	}
	return consumed
}
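
// A usage sketch (the helper name is hypothetical): crLf is iterated in order,
// so consumeCRLF accepts "\r\n" as well as a lone "\r" or "\n" and reports
// whether any line-break rune was consumed; a caller that does not care about
// the line break itself can then drop it via ignore().
func skipLineBreakExample(l *pageLexer) {
	if l.consumeCRLF() {
		l.ignore() // discard everything consumed since l.start, including the line break
	}
}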

func (l *pageLexer) consumeToNextLine() {
	for {
		r := l.next()
		if r == eof || isEndOfLine(r) {
			return
		}
	}
}

func (l *pageLexer) consumeToSpace() {
	for {
		r := l.next()
		if r == eof || unicode.IsSpace(r) {
			l.backup()
			return
		}
	}
}

func (l *pageLexer) consumeSpace() {
	for {
		r := l.next()
		if r == eof || !unicode.IsSpace(r) {
			l.backup()
			return
		}
	}
}
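
// A small sketch of the consume* helpers in a state function, assuming the
// state names defined elsewhere in this file: consumeSpace leaves the cursor
// on the first non-space rune (or at EOF), after which lexing can continue.
// The function name is illustrative only.
func lexSkipSpaceExample(l *pageLexer) stateFunc {
	l.consumeSpace()
	if l.isEOF() {
		return lexDone
	}
	return lexMainSection
}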

// lex a string starting at ":"
func lexEmoji(l *pageLexer) stateFunc {
	pos := l.pos + 1
	valid := false

	for i := pos; i < len(l.input); i++ {
		if i > pos && l.input[i] == emojiDelim {
			pos = i + 1
			valid = true
			break
		}
		r, _ := utf8.DecodeRune(l.input[i:])
		if !(isAlphaNumericOrHyphen(r) || r == '+') {
			break
		}
	}

	if valid {
		l.pos = pos
		l.emit(TypeEmoji)
	} else {
		l.pos++
		l.emit(tText)
	}

	return lexMainSection
}
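
// For clarity, the rune filter lexEmoji applies while scanning for the closing
// delimiter, extracted here as a sketch (the helper name is illustrative, not
// part of the original file): names such as ":+1:" or ":heavy_check_mark:" are
// accepted, while a space or other punctuation ends the scan.
func isEmojiNameRune(r rune) bool {
	return isAlphaNumericOrHyphen(r) || r == '+'
}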

type sectionHandlers struct {
	l *pageLexer

	// Set when none of the sections are found so we
	// can safely stop looking and skip to the end.
	skipAll bool

	handlers    []*sectionHandler
	skipIndexes []int
}

func (s *sectionHandlers) skip() int {
	if s.skipAll {
		return -1
	}

	s.skipIndexes = s.skipIndexes[:0]
	var shouldSkip bool
	for _, skipper := range s.handlers {
		idx := skipper.skip()
		if idx != -1 {
			shouldSkip = true
			s.skipIndexes = append(s.skipIndexes, idx)
		}
	}

	if !shouldSkip {
		s.skipAll = true
		return -1
	}

	return minIndex(s.skipIndexes...)
}

func createSectionHandlers(l *pageLexer) *sectionHandlers {
	shortCodeHandler := &sectionHandler{
		l: l,
		skipFunc: func(l *pageLexer) int {
			return l.index(leftDelimSc)
		},
		lexFunc: func(origin stateFunc, l *pageLexer) (stateFunc, bool) {
			if !l.isShortCodeStart() {
				return origin, false
			}

			if l.isInline {
				// If we're inside an inline shortcode, the only valid shortcode markup is
				// the markup which closes it.
				b := l.input[l.pos+3:]
				end := indexNonWhiteSpace(b, '/')
				if end != len(l.input)-1 {
					b = bytes.TrimSpace(b[end+1:])
					if end == -1 || !bytes.HasPrefix(b, []byte(l.currShortcodeName+" ")) {
						return l.errorf("inline shortcodes do not support nesting"), true
					}
				}
			}

			if l.hasPrefix(leftDelimScWithMarkup) {
				l.currLeftDelimItem = tLeftDelimScWithMarkup
				l.currRightDelimItem = tRightDelimScWithMarkup
			} else {
				l.currLeftDelimItem = tLeftDelimScNoMarkup
				l.currRightDelimItem = tRightDelimScNoMarkup
			}

			return lexShortcodeLeftDelim, true
		},
	}

	summaryDividerHandler := &sectionHandler{
		l: l,
		skipFunc: func(l *pageLexer) int {
			if l.summaryDividerChecked || l.summaryDivider == nil {
				return -1
			}
			return l.index(l.summaryDivider)
		},
		lexFunc: func(origin stateFunc, l *pageLexer) (stateFunc, bool) {
			if !l.hasPrefix(l.summaryDivider) {
				return origin, false
			}

			l.summaryDividerChecked = true
			l.pos += len(l.summaryDivider)
			// This makes it a little easier to reason about later.
			l.consumeSpace()
			l.emit(TypeLeadSummaryDivider)

			return origin, true
		},
	}

	handlers := []*sectionHandler{shortCodeHandler, summaryDividerHandler}

	if l.cfg.EnableEmoji {
		emojiHandler := &sectionHandler{
			l: l,
			skipFunc: func(l *pageLexer) int {
				return l.indexByte(emojiDelim)
			},
			lexFunc: func(origin stateFunc, l *pageLexer) (stateFunc, bool) {
				return lexEmoji, true
			},
		}

		handlers = append(handlers, emojiHandler)
	}

	return &sectionHandlers{
		l:           l,
		handlers:    handlers,
		skipIndexes: make([]int, len(handlers)),
	}
}

func (s *sectionHandlers) lex(origin stateFunc) stateFunc {
	if s.skipAll {
		return nil
	}

	if s.l.pos > s.l.start {
		s.l.emit(tText)
	}

	for _, handler := range s.handlers {
		if handler.skipAll {
			continue
		}

		next, handled := handler.lexFunc(origin, handler.l)
		if next == nil || handled {
			return next
		}
	}

	// Not handled by the above.
	s.l.pos++

	return origin
}

type sectionHandler struct {
	l *pageLexer

	// No more sections of this type.
	skipAll bool

	// Returns the index of the next match, -1 if none found.
	skipFunc func(l *pageLexer) int

	// Lex lexes the current section and returns the next state func and
	// a bool telling if this section was handled.
	// Note that returning nil as the next state will terminate the
	// lexer.
	lexFunc func(origin stateFunc, l *pageLexer) (stateFunc, bool)
}
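
// A sketch of what an additional handler could look like, following the same
// contract (the HTML-comment handler below is hypothetical and only illustrates
// the shape; htmlCommentStart is an assumed []byte delimiter, not a variable in
// this file): skipFunc reports where the next candidate starts relative to
// l.pos, and lexFunc either handles it and returns the next state, or declines
// by returning origin with handled == false.
func newHTMLCommentHandlerSketch(l *pageLexer, htmlCommentStart []byte) *sectionHandler {
	return &sectionHandler{
		l: l,
		skipFunc: func(l *pageLexer) int {
			return l.index(htmlCommentStart)
		},
		lexFunc: func(origin stateFunc, l *pageLexer) (stateFunc, bool) {
			if !l.hasPrefix(htmlCommentStart) {
				return origin, false
			}
			l.pos += len(htmlCommentStart)
			return origin, true
		},
	}
}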

func (s *sectionHandler) skip() int {
	if s.skipAll {
		return -1
	}

	idx := s.skipFunc(s.l)
	if idx == -1 {
		s.skipAll = true
	}
	return idx
}

func lexMainSection(l *pageLexer) stateFunc {
	if l.isEOF() {
		return lexDone
	}

	if l.isInHTMLComment {
		return lexEndFrontMatterHTMLComment
	}

	// Fast forward as far as possible.
	skip := l.sectionHandlers.skip()

	if skip == -1 {
		l.pos = len(l.input)
		return lexDone
	} else if skip > 0 {
		l.pos += skip
	}

	next := l.sectionHandlers.lex(lexMainSection)
	if next != nil {
		return next
	}

	l.pos = len(l.input)
	return lexDone
}
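
// The state functions above tie together in the classic "Lexical Scanning in
// Go" fashion. A driving loop along these lines (a sketch only, assuming
// lexMainSection as the entry state; the real entry point and run loop live
// elsewhere in this package) keeps calling the returned stateFunc until one of
// them returns nil:
func runSketch(l *pageLexer) {
	for state := stateFunc(lexMainSection); state != nil; {
		state = state(l)
	}
}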

func lexDone(l *pageLexer) stateFunc {
	// Done!
	if l.pos > l.start {
		l.emit(tText)
	}
	l.emit(tEOF)
	return nil
}

func (l *pageLexer) printCurrentInput() {
	fmt.Printf("input[%d:]: %q", l.pos, string(l.input[l.pos:]))
}

// state helpers

func (l *pageLexer) index(sep []byte) int {
	return bytes.Index(l.input[l.pos:], sep)
}

func (l *pageLexer) indexByte(sep byte) int {
	return bytes.IndexByte(l.input[l.pos:], sep)
}

func (l *pageLexer) hasPrefix(prefix []byte) bool {
	return bytes.HasPrefix(l.input[l.pos:], prefix)
}
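
// All three helpers above operate relative to the current cursor, so an index
// of 0 means "right at l.pos". A usage sketch (the function name is
// illustrative only):
func cursorHelpersExample(l *pageLexer) bool {
	if l.hasPrefix(leftDelimSc) {
		return true // the cursor is sitting on a shortcode opening delimiter
	}
	return l.index(leftDelimSc) >= 0 // a shortcode opening delimiter occurs later in the input
}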

// helper functions

// returns the min index >= 0
func minIndex(indices ...int) int {
	min := -1

	for _, j := range indices {
		if j < 0 {
			continue
		}
		if min == -1 {
			min = j
		} else if j < min {
			min = j
		}
	}
	return min
}
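
// Usage sketch (illustrative only): negative values mean "no match", so the
// smallest non-negative index wins, and -1 comes back when nothing matched.
func minIndexExample() {
	_ = minIndex(12, -1, 3) // 3
	_ = minIndex(-1, -1)    // -1
}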

func indexNonWhiteSpace(s []byte, in rune) int {
	idx := bytes.IndexFunc(s, func(r rune) bool {
		return !unicode.IsSpace(r)
	})

	if idx == -1 {
		return -1
	}

	r, _ := utf8.DecodeRune(s[idx:])
	if r == in {
		return idx
	}
	return -1
}
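
// Usage sketch (illustrative only): the index of the first non-whitespace rune
// is returned only when that rune equals in; otherwise the result is -1.
func indexNonWhiteSpaceExample() {
	_ = indexNonWhiteSpace([]byte("   /end"), '/') // 3
	_ = indexNonWhiteSpace([]byte("   xend"), '/') // -1 (first non-space rune is 'x')
	_ = indexNonWhiteSpace([]byte("      "), '/')  // -1 (only whitespace)
}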

func isSpace(r rune) bool {
	return r == ' ' || r == '\t'
}

func isAlphaNumericOrHyphen(r rune) bool {
	// let unquoted YouTube ids as positional params slip through (they contain hyphens)
	return isAlphaNumeric(r) || r == '-'
}
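
// Usage sketch (illustrative only): hyphens are accepted so that unquoted
// positional params such as YouTube video ids survive the scan intact.
func isAlphaNumericOrHyphenExample() {
	_ = isAlphaNumericOrHyphen('w') // true
	_ = isAlphaNumericOrHyphen('-') // true
	_ = isAlphaNumericOrHyphen('!') // false
}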

var crLf = []rune{'\r', '\n'}

func isEndOfLine(r rune) bool {
	return r == '\r' || r == '\n'
}

func isAlphaNumeric(r rune) bool {
	return r == '_' || unicode.IsLetter(r) || unicode.IsDigit(r)
}