Added import statement
This commit is contained in:
parent a650fc940e
commit 38edd2a01c
1 changed file with 3 additions and 1 deletion
@@ -8,6 +8,8 @@ tags: ["python"]
There is often a trade-off between CPU usage and memory usage. In this post, I will show how the [`lru_cache`](https://docs.python.org/3/library/functools.html#functools.lru_cache) decorator can cache the results of a function call for quicker future lookup.
```python
from functools import lru_cache

@lru_cache(maxsize=2**7)
def fib(n):
    if n == 1:
@@ -19,4 +21,4 @@ def fib(n):
In the code above, `maxsize` indicates how many call results to store. Setting it to `None` removes the upper bound entirely. The documentation recommends setting it to a power of two.
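The diff cuts off before the end of the function, so as a minimal sketch (assuming the standard recursive definition of `fib`; the exact base-case values are not shown above), a complete version would look roughly like this, with `cache_info()` included only to show the configured `maxsize`:

```python
from functools import lru_cache

@lru_cache(maxsize=2**7)  # remember up to 128 distinct calls; maxsize=None would make the cache unbounded
def fib(n):
    # Assumed body: the usual recursive Fibonacci definition
    if n == 1:
        return 1
    if n == 2:
        return 1
    return fib(n - 1) + fib(n - 2)

print(fib(50))           # fast, since each fib(k) is computed once and then reused
print(fib.cache_info())  # CacheInfo(...) reports hits, misses, maxsize=128, and currsize
```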
Do note, though, that `lru_cache` does not make the lines inside the function execute any faster. It only stores the results of the function in a dictionary.
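To make that concrete, here is a small illustrative sketch (the `slow_square` function and the one-second sleep are invented for this example, not taken from the post): the first call with a given argument still runs the full body, and only repeated calls with the same argument become cache lookups.

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def slow_square(n):
    # lru_cache does not speed this body up; it runs in full on every cache miss
    time.sleep(1)
    return n * n

start = time.perf_counter()
slow_square(4)                   # cache miss: takes about one second
print(f"first call:  {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
slow_square(4)                   # cache hit: the stored result is returned immediately
print(f"second call: {time.perf_counter() - start:.4f}s")

print(slow_square.cache_info())  # CacheInfo(hits=1, misses=1, maxsize=None, currsize=1)
```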