From 5693b4a32c9d64d6a11e8656432261ba4b6908e0 Mon Sep 17 00:00:00 2001
From: Brandon Rozek
Date: Mon, 30 Mar 2020 17:44:31 -0400
Subject: [PATCH] New Post

---
 content/blog/pymemoization.md | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)
 create mode 100644 content/blog/pymemoization.md

diff --git a/content/blog/pymemoization.md b/content/blog/pymemoization.md
new file mode 100644
index 0000000..b6f723c
--- /dev/null
+++ b/content/blog/pymemoization.md
@@ -0,0 +1,24 @@
+---
+title: "Quick Python: Memoization"
+date: 2020-03-30T17:31:55-04:00
+draft: false
+tags: ["python"]
+---
+
+There is often a trade-off between CPU time and memory usage. In this post, I will show how the [`lru_cache`](https://docs.python.org/3/library/functools.html#functools.lru_cache) decorator can cache the results of a function call for quicker future lookup.
+
+```python
+from functools import lru_cache
+
+@lru_cache(maxsize=2**7)
+def fib(n):
+    if n == 1:
+        return 0
+    if n == 2:
+        return 1
+    return fib(n - 1) + fib(n - 2)
+```
+
+In the code above, `maxsize` indicates the maximum number of results to cache. Setting it to `None` removes the upper bound. The documentation recommends setting it to a power of two.
+
+Note, however, that `lru_cache` does not make the lines within the function execute any faster. It only stores the results of previous calls in a dictionary for reuse.
\ No newline at end of file
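
For reference, a self-contained sketch of the memoized Fibonacci example from the post above. Note that the recursive calls must target the decorated name (here `fib`) so that every call goes through the cache; `fib.cache_info()` is part of the `lru_cache` API and reports hits, misses, and cache size:

```python
from functools import lru_cache

@lru_cache(maxsize=2**7)
def fib(n):
    """Return the n-th Fibonacci number, with fib(1) == 0 and fib(2) == 1."""
    if n == 1:
        return 0
    if n == 2:
        return 1
    # Recursing through the decorated name means each subproblem
    # is computed only once and then served from the cache.
    return fib(n - 1) + fib(n - 2)

print(fib(10))           # 34
print(fib.cache_info())  # CacheInfo(hits=7, misses=10, maxsize=128, currsize=10)
```

With the cache, computing `fib(10)` makes only 10 real evaluations (one per distinct `n`); the remaining lookups are cache hits, versus exponentially many calls for the naive recursion.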