If you've been writing Python for a while, you've probably noticed that certain patterns keep showing up — caching expensive results, creating specialized versions of functions, or building cleaner decorators. Python functools is the standard library module designed to handle exactly these patterns. It gives you a set of tools for working with functions as first-class objects — storing them, wrapping them, combining them, and augmenting them in ways that make your code more expressive and easier to reason about.

The functools module ships with every Python installation and requires no third-party packages. It's a small module with a focused purpose: giving you utilities for higher-order functions. Let's walk through the most useful tools in functools one by one, with real code and real explanations.

lru_cache — Memoization in Python Without the Boilerplate

One of the most commonly reached-for features of functools is lru_cache. LRU stands for Least Recently Used: the decorator caches a function's return values, keyed by the arguments it received. If the same arguments come in again, the function doesn't re-execute — it just returns the stored result immediately.

This is called memoization in Python, a classic technique where you trade a bit of memory for a lot of speed. Without lru_cache, you'd have to maintain your own dictionary of seen inputs and their outputs. With it, a single decorator line handles everything.

Here's a Fibonacci example that makes the impact obvious:

from functools import lru_cache

@lru_cache(maxsize=128)
def fibonacci(n):
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(36))
print(fibonacci.cache_info())
14930352
CacheInfo(hits=34, misses=37, maxsize=128, currsize=37)

Without lru_cache, computing fibonacci(36) would trigger an exponential number of recursive calls — the same subproblems computed millions of times over. With lru_cache applied, each unique value of n is computed exactly once and stored. The cache_info() method gives you a real-time report: 34 hits means 34 calls were served straight from the cache without running the function body at all.

The maxsize parameter controls how many distinct call signatures the cache holds. When the cache is full, the least recently used entry gets evicted to make room. If you set maxsize=None, the cache grows without bound — Python 3.9+ also offers this as the simpler @cache decorator. The lru_cache decorator only works with hashable arguments, so lists or dicts as arguments won't work, but tuples will.
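Both behaviors are easy to check. The sketch below assumes Python 3.9+ for @cache; normalize is just an illustrative function:

```python
from functools import cache

@cache  # unbounded cache, equivalent to lru_cache(maxsize=None)
def normalize(word):
    return word.strip().lower()

print(normalize("  Hello "))        # first call: computed
print(normalize("  Hello "))        # second call: served from the cache
print(normalize.cache_info())       # CacheInfo(hits=1, misses=1, ...)

try:
    normalize(["not", "hashable"])  # lists can't be cache keys
except TypeError:
    print("unhashable arguments raise TypeError")
```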

partial — Baking Arguments Into a Function

The partial function from the functools module lets you pre-fill some of a function's arguments, creating a new callable that remembers those values. Instead of passing the same arguments repeatedly, you create a more specific version of a general function once, and then call that.

Think of it this way: you have a general power function. If you always need to square numbers, you shouldn't have to pass the exponent 2 every single call. With partial, you create a specialized square from the general power.

from functools import partial

def power(base, exponent):
    return base ** exponent

square = partial(power, exponent=2)
cube = partial(power, exponent=3)

print(square(5))
print(cube(4))
print(square(11))
25
64
121

partial didn't define a new Python function — square and cube are functools.partial objects that remember the pre-set keyword argument. Calling square(5) internally translates to power(5, exponent=2). This works for positional arguments too, not just keyword ones.
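Here's the positional form in a quick sketch (log and warn are illustrative names):

```python
from functools import partial

def log(level, message):
    return f"[{level}] {message}"

warn = partial(log, "WARN")       # "WARN" is bound to the first positional slot
print(warn("disk usage at 91%"))  # -> [WARN] disk usage at 91%
```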

Here's a more practical use case — sorting a list of records by a field index, using partial to pin the index:

from functools import partial

employees = [
    ("diana", "engineering", 95000),
    ("alan", "marketing", 72000),
    ("priya", "engineering", 110000),
]

get_column = partial(lambda record, idx: record[idx], idx=2)

sorted_employees = sorted(employees, key=get_column)
for emp in sorted_employees:
    print(emp)
('alan', 'marketing', 72000)
('diana', 'engineering', 95000)
('priya', 'engineering', 110000)

The get_column callable has index 2 locked in, so sorted just needs to pass each record. partial is especially valuable in callback-heavy code — when an API expects a callable with a specific signature but your function needs extra context.
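As a sketch of that callback scenario, imagine a dispatcher that invokes each handler with no arguments (the notify function and channel names are made up for illustration):

```python
from functools import partial

def notify(channel, message):
    return f"to {channel}: {message}"

# Each partial carries its own context, so the dispatcher can
# call every handler the same way: with zero arguments.
handlers = [
    partial(notify, "ops", "deploy started"),
    partial(notify, "dev", "tests green"),
]

for handler in handlers:
    print(handler())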

reduce — Collapsing a Sequence into One Value

The reduce function in functools applies a two-argument function cumulatively across a sequence, folding the entire collection down into a single result. It's a foundational higher-order function — borrowed from functional programming — and gives you a clean alternative to explicit accumulator loops.

from functools import reduce

numbers = [4, 7, 2, 9, 3]
product = reduce(lambda acc, x: acc * x, numbers)
print(product)
1512

Here's how that unfolds step by step: 4 * 7 = 28, then 28 * 2 = 56, then 56 * 9 = 504, then 504 * 3 = 1512. Each call takes the accumulated result and the next element, producing a new accumulated result. reduce threads a rolling computation through the entire list.
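For common folds like this, the standard operator module pairs naturally with reduce, replacing the lambda with a named function:

```python
from functools import reduce
import operator

numbers = [4, 7, 2, 9, 3]
print(reduce(operator.mul, numbers))  # -> 1512, same fold as the lambda version
print(reduce(operator.add, numbers))  # -> 25, a running sum
```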

You can also pass an initial value as the third argument to reduce, which serves as the starting accumulator before the first element is touched:

from functools import reduce

tags = ["python", "functools", "tutorial"]
html = reduce(lambda acc, tag: acc + f"<li>{tag}</li>", tags, "<ul>")
html += "</ul>"
print(html)
<ul><li>python</li><li>functools</li><li>tutorial</li></ul>

The initial value "<ul>" starts the accumulation. Each step appends a list item. The final "</ul>" closes it off. Without an initial value, reduce would use the first element of the list as the starting point, which can cause issues on empty sequences — always pass an initial value when the list might be empty.
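The empty-sequence difference is easy to demonstrate directly:

```python
from functools import reduce

# With an initial value, an empty sequence simply returns that value.
print(reduce(lambda acc, x: acc + x, [], 0))  # -> 0

# Without one, reduce has nothing to start from and raises TypeError.
try:
    reduce(lambda acc, x: acc + x, [])
except TypeError as exc:
    print(f"TypeError: {exc}")
```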

wraps — Making Decorators Transparent

When you write a decorator, you're replacing one function with another. The problem is that your wrapper function has its own __name__, __doc__, __module__, and other metadata — so the original function's identity gets silently overwritten. functools.wraps is the fix.

functools.wraps is a decorator you apply inside your decorator, on the wrapper function. It copies the wrapped function's metadata onto the wrapper, so anyone inspecting the function — debuggers, logging tools, test frameworks, documentation generators — sees the real function, not the wrapper.

from functools import wraps

def retry_on_error(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        for attempt in range(3):
            try:
                return func(*args, **kwargs)
            except Exception as e:
                print(f"Attempt {attempt + 1} failed: {e}")
        raise RuntimeError(f"{func.__name__} failed after 3 attempts")
    return wrapper

@retry_on_error
def fetch_data(url):
    """Fetches JSON data from the given URL."""
    raise ConnectionError("network unreachable")

print(fetch_data.__name__)
print(fetch_data.__doc__)
fetch_data
Fetches JSON data from the given URL.

Without @wraps(func), fetch_data.__name__ would print "wrapper" and fetch_data.__doc__ would be None. The decorator would be leaking its internal plumbing. With @wraps applied, the decorator is completely invisible from the outside — the function presents itself exactly as if it were unwrapped.

This matters in production code because logging systems often use __name__ to identify which function generated a log line, and doctests use __doc__. Skipping wraps creates subtle bugs that are annoying to track down.
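To see the leak for yourself, here's the same pattern with @wraps deliberately omitted (bare_decorator and greet are illustrative names):

```python
def bare_decorator(func):  # note: no @wraps on the wrapper
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

@bare_decorator
def greet(name):
    """Return a greeting for the given name."""
    return f"hello, {name}"

print(greet.__name__)  # -> wrapper, not greet
print(greet.__doc__)   # -> None: the docstring vanished
```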

total_ordering — Full Comparisons from Just Two Methods

If you've ever implemented comparison methods on a Python class, you know that full comparison support means defining __lt__, __le__, __gt__, and __ge__ — all four, plus __eq__. That's a lot of repetitive code for something logically redundant: if you know how to check equality and how to check "less than", everything else follows mathematically.

functools.total_ordering is a class decorator that derives the missing comparison methods for you. You define __eq__ and exactly one of __lt__, __le__, __gt__, or __ge__, and the decorator fills in the rest.

from functools import total_ordering

@total_ordering
class FileSize:
    def __init__(self, name, bytes_count):
        self.name = name
        self.bytes_count = bytes_count

    def __eq__(self, other):
        return self.bytes_count == other.bytes_count

    def __lt__(self, other):
        return self.bytes_count < other.bytes_count

    def __repr__(self):
        return f"FileSize({self.name!r}, {self.bytes_count}B)"

readme = FileSize("README.md", 1024)
video = FileSize("demo.mp4", 52428800)
config = FileSize("config.yaml", 512)

files = [readme, video, config]
print(sorted(files))
print(readme > config)
print(video >= readme)
print(config <= readme)
[FileSize('config.yaml', 512B), FileSize('README.md', 1024B), FileSize('demo.mp4', 52428800B)]
True
True
True

We only defined __eq__ and __lt__, but total_ordering derived __gt__, __ge__, and __le__ automatically. Sorting works, all six comparison operators work, and we wrote less than half the comparison code we'd normally need. This is particularly useful when building data model classes that need to be sorted, stored in priority queues, or compared in business logic.
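As a sketch of the priority-queue case: the same two methods are all heapq needs to order custom objects (the Task class here is illustrative):

```python
import heapq
from functools import total_ordering

@total_ordering
class Task:
    def __init__(self, priority, name):
        self.priority = priority
        self.name = name

    def __eq__(self, other):
        return self.priority == other.priority

    def __lt__(self, other):  # heapq only ever needs <
        return self.priority < other.priority

heap = []
for task in (Task(3, "deploy"), Task(1, "triage"), Task(2, "review")):
    heapq.heappush(heap, task)

order = [heapq.heappop(heap).name for _ in range(3)]
print(order)  # -> ['triage', 'review', 'deploy']
```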

Full Working Example — Grade Book with functools

This program builds a grade book that computes weighted average scores, caches the computation, traces function calls with a decorator, and ranks students using rich comparisons. Every functools tool covered above appears here together.

from functools import lru_cache, partial, reduce, wraps, total_ordering


def trace(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        print(f"[trace] {func.__name__}{args} -> {result}")
        return result
    return wrapper


def weighted_combine(acc, pair):
    score, weight = pair
    return (acc[0] + score * weight, acc[1] + weight)


compute_weighted = partial(reduce, weighted_combine)


@lru_cache(maxsize=64)  # outermost, so cache_info() stays reachable on weighted_average
@trace                  # innermost, so tracing fires only on actual cache misses
def weighted_average(scores_weights):
    total_score, total_weight = compute_weighted(scores_weights, (0.0, 0.0))
    return round(total_score / total_weight, 2)


@total_ordering
class Student:
    def __init__(self, name, scores_weights):
        self.name = name
        self.scores_weights = scores_weights
        self.avg = weighted_average(self.scores_weights)

    def __eq__(self, other):
        return self.avg == other.avg

    def __lt__(self, other):
        return self.avg < other.avg

    def __repr__(self):
        return f"Student(name={self.name!r}, avg={self.avg})"


students = [
    Student("Alice", ((88, 0.25), (92, 0.50), (79, 0.25))),
    Student("Bob", ((74, 0.25), (85, 0.50), (91, 0.25))),
    Student("Charlie", ((95, 0.25), (83, 0.50), (88, 0.25))),
    Student("Diana", ((80, 0.25), (91, 0.50), (85, 0.25))),
]

print("\n--- Rankings (highest to lowest) ---")
ranked = sorted(students, reverse=True)
for rank, student in enumerate(ranked, 1):
    print(f" #{rank}: {student}")

print(f"\nAlice > Bob? {students[0] > students[1]}")
print(f"Bob >= Charlie? {students[1] >= students[2]}")
print(f"\nCache info: {weighted_average.cache_info()}")
Output
[trace] weighted_average(((88, 0.25), (92, 0.5), (79, 0.25)),) -> 87.75
[trace] weighted_average(((74, 0.25), (85, 0.5), (91, 0.25)),) -> 83.75
[trace] weighted_average(((95, 0.25), (83, 0.5), (88, 0.25)),) -> 87.25
[trace] weighted_average(((80, 0.25), (91, 0.5), (85, 0.25)),) -> 86.75

--- Rankings (highest to lowest) ---
 #1: Student(name='Alice', avg=87.75)
 #2: Student(name='Charlie', avg=87.25)
 #3: Student(name='Diana', avg=86.75)
 #4: Student(name='Bob', avg=83.75)

Alice > Bob? True
Bob >= Charlie? False

Cache info: CacheInfo(hits=0, misses=4, maxsize=64, currsize=4)

Each student's weighted average is computed once and stored by lru_cache — the cache info confirms 4 misses and zero hits because each student has a unique score tuple. The trace decorator (using wraps to stay transparent) shows us exactly when computation happened. Sorting works richly because total_ordering derived __gt__ and __ge__ from the two methods we wrote. And partial locked reduce together with the combine function so the accumulator logic stays in one place.

That is functools doing what it does best — a small, focused module that quietly eliminates entire categories of repetitive code.