# Cached Processes
In this section, you will learn how to use the `@cached` annotation to speed up assessments by reducing the size of the system that needs to be solved at runtime.
## Why caching?
When running an inventory or impact assessment, the engine builds a system of equations that represents all processes and dependencies in the model. This system is then solved to compute the inventory or the impacts.
For large models, this system can grow considerably, and solving it in one pass can be costly.
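As a mental model, this system can be pictured with the standard matrix formulation of LCA. This is a simplified sketch; the engine’s internal representation may differ. With a technosphere matrix $A$, a demand vector $f$, and per-process impact coefficients $B$:

$$
A\,s = f, \qquad h = B\,s
$$

Here $s$ scales each process so that the demand $f$ is met, and $h$ collects the resulting impacts. The cost of the solve grows with the size of $A$, that is, with the number of processes in the system.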
The `@cached` annotation allows the engine to pre-solve part of the system (a smaller subsystem of processes) at runtime. The result of that local solve is then reused in the global assessment. This way, the final system is smaller, and the assessment becomes more efficient.
## A simple example
Consider again the sandwich example:
```
process sandwich_factory {
    products {
        1 u sandwich
    }
    inputs {
        200 g bread
        50 g ham
    }
}

process bake {
    products {
        1 kg bread
    }
    inputs {
        1 kg flour
        30 g salt
    }
}

process flour_production {
    products {
        1 kg flour
    }
    impacts {
        2 kg_CO2_Eq GWP
    }
}

process salt_production {
    products {
        1 g salt
    }
    impacts {
        0.05 kg_CO2_Eq GWP
    }
}

process ham_production {
    products {
        1 g ham
    }
    impacts {
        0.7 kg_CO2_Eq GWP
    }
}
```
Without caching, the global solve includes `sandwich_factory`, `bake`, and all of `flour_production`, `salt_production`, and `ham_production` in the same system.
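To make this concrete in the formulation sketched above, one sandwich requires 0.2 kg of bread, hence 0.2 kg of flour and 6 g of salt, plus 50 g of ham. The scaling vector over (`sandwich_factory`, `bake`, `flour_production`, `salt_production`, `ham_production`), each entry in its process’s reference unit, and the resulting GWP are:

$$
s = (1,\ 0.2,\ 0.2,\ 6,\ 50), \qquad h_{\mathrm{GWP}} = 0.2 \cdot 2 + 6 \cdot 0.05 + 50 \cdot 0.7 = 35.7\ \text{kg\_CO2\_Eq}
$$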
Now, let us add caching:
```
@cached
process bake {
    products {
        1 kg bread
    }
    inputs {
        1 kg flour
        30 g salt
    }
}
```
With `@cached`, when assessing one sandwich, the engine will:

- At runtime, build a subsystem for `bake` and its dependencies (`flour_production`, `salt_production`).
- Solve it immediately.
- Use the result of that sub-solve to represent `bake` in the global solve of `sandwich_factory`.
From the outside, the result is exactly the same, but the global system no longer needs to carry the `flour_production` and `salt_production` processes.

It is equivalent to rewriting the `bake` process with the impacts of `flour_production` and `salt_production` collapsed:
```
// equivalent submodel
process bake {
    products {
        1 kg bread
    }
    impacts {
        3.5 kg_CO2_Eq GWP // 1 * 2 kg_CO2_Eq from `flour_production` + 30 * 0.05 kg_CO2_Eq from `salt_production`
    }
}
```
The difference is that with `@cached` you don’t have to do this rewriting yourself: the aggregation happens automatically at runtime.
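Pushing the same collapsing one level further, for illustration only, the entire model reduces to a single process whose GWP matches the 35.7 kg_CO2_Eq derived earlier. This is not something you would write yourself; the snippet below is purely hypothetical:

```
// fully-collapsed model, for illustration only
process sandwich_factory {
    products {
        1 u sandwich
    }
    impacts {
        35.7 kg_CO2_Eq GWP // 0.2 * 3.5 kg_CO2_Eq from `bake` + 50 * 0.7 kg_CO2_Eq from `ham_production`
    }
}
```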
## What happens at runtime
Using `@cached` changes the solving pipeline dynamically:

- Local solve at runtime
    - The assessment engine detects a `@cached` process.
    - It extracts the subsystem of that process and its dependencies.
    - It solves the subsystem immediately and records the equivalent flows.
- Global solve with a reduced system
    - The global system is built as usual, but `@cached` processes are now represented by their pre-solved equivalents.
    - The global system is smaller, and therefore faster to solve.
Importantly:
- The process definition itself is never rewritten.
- The optimization happens transparently during analysis.
## When to use caching
The `@cached` annotation is useful when:
- A process has heavy dependencies that slow down global solving.
- A process (or subsystem) is reused many times in different parts of the model.
- You want to speed up runtime while preserving correct results.
It is less useful when:
- You want to inspect the detailed contributions of a process’s dependencies in the global system (since they are collapsed at runtime).
- The process depends on parameters that change frequently: the sub-solve will need to be recomputed anyway (see the sketch after this list).
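For instance, if `bake` exposed the salt quantity as a parameter, each distinct value would force a fresh sub-solve. The sketch below assumes parametrized processes are written with a `params` block; the parameter name `salt_qty` is purely illustrative, and the exact syntax may differ:

```
@cached // little benefit here: every distinct value of salt_qty forces a new sub-solve
process bake {
    params {
        salt_qty = 30 g
    }
    products {
        1 kg bread
    }
    inputs {
        1 kg flour
        salt_qty salt
    }
}
```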
In short: `@cached` is a runtime optimization that lets the assessment engine locally pre-solve subsystems, reducing the size of the global system and making assessments faster, without changing your process definitions.