Should I reach into Django's _prefetched_objects_cache to solve an N+1 query?
I have the following Django template code with an N+1 query:
{% for theobject in objs %}
  {% for part in theobject.parts_ordered %}
    <li>{{ part }}</li>
  {% endfor %}
{% endfor %}
Here is parts_ordered on TheObject:
class TheObject(models.Model):
    # ...
    def parts_ordered(self) -> list["Part"]:
        return self.parts.all().order_by("pk")
And here is the Part object:
class Part(models.Model):
    # ...
    theobject = models.ForeignKey(
        TheObject, on_delete=models.CASCADE, related_name="parts"
    )
And here is the prefetch that fetches objs:
ofs = ObjectFormSet(
    queryset=TheObject.objects
    .filter(objectset=os)
    .prefetch_related("parts")
)
I think the .order_by("pk") in parts_ordered disrupts the prefetch: the related manager only serves a plain .all() from the prefetch cache, so chaining any further queryset method forces a fresh database query per object.
This is what ChatGPT recommends, and it works (no more N+1 queries, and the results appear identical):
class TheObject(models.Model):
    # ...
    def parts_ordered(self) -> list["Part"]:
        if (
            hasattr(self, "_prefetched_objects_cache")
            and "parts" in self._prefetched_objects_cache
        ):
            # Use prefetched data and sort in Python
            return sorted(
                self._prefetched_objects_cache["parts"], key=lambda cc: cc.pk
            )
        # Fallback to querying the DB if prefetching wasn't used
        return self.parts.all().order_by("pk")
Should I rely on _prefetched_objects_cache? Is there a better way?