From 38edd2a01c36f9be5e53bf5dea9c1aef72824030 Mon Sep 17 00:00:00 2001
From: Brandon Rozek
Date: Tue, 31 Mar 2020 09:26:32 -0400
Subject: [PATCH] Added import statement

---
 content/blog/pymemoization.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/content/blog/pymemoization.md b/content/blog/pymemoization.md
index b6f723c..417c0ae 100644
--- a/content/blog/pymemoization.md
+++ b/content/blog/pymemoization.md
@@ -8,6 +8,8 @@ tags: ["python"]
 There is often a trade-off when it comes to efficiency of CPU vs memory usage. In this post, I will show how the [`lru_cache`](https://docs.python.org/3/library/functools.html#functools.lru_cache) decorator can cache results of a function call for quicker future lookup.
 
 ```python
+from functools import lru_cache
+
 @lru_cache(maxsize=2**7)
 def fib(n):
     if n == 1:
@@ -19,4 +21,4 @@ def fib(n):
 
 In the code above, `maxsize` indicates the number of calls to store. Setting it to `None` will make it so that there is no upper bound. The documentation recommends setting it equal to a power of two.
 
-Do note though that `lru_cache` does not make the execution of the lines in the function faster. It only stores the results of the function in a dictionary.
\ No newline at end of file
+Do note though that `lru_cache` does not make the execution of the lines in the function faster. It only stores the results of the function in a dictionary.
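
The hunks above only show the top of the patched `fib` function; the recursive body is outside the diff context. A minimal runnable sketch of what the patched snippet describes might look like the following (the base cases and recursive step are assumed, since they are not visible in the hunk):

```python
from functools import lru_cache

# maxsize=2**7 caps the cache at 128 entries; the functools docs
# recommend a power of two (or None for an unbounded cache).
@lru_cache(maxsize=2**7)
def fib(n):
    # Base cases assumed from the standard Fibonacci definition;
    # the original post's body is elided in the diff.
    if n == 1 or n == 2:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(30))           # cached intermediate results keep this linear
print(fib.cache_info())  # hit/miss statistics exposed by lru_cache
```

As the patched post notes, `lru_cache` only memoizes return values keyed by the call arguments; it does not speed up the function body itself, so the first call at each argument still pays full cost.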