Django/PostgreSQL: Unique constraint with a dynamic condition like expires_at > now()
I'm building a secure OTP system in Django. The model looks like this:
from django.db import models
from phonenumber_field.modelfields import PhoneNumberField


class OTP(models.Model):
    MAX_ATTEMPTS = 3
    DEFAULT_EXPIRE_IN_MINUTES = 1

    class Purposes(models.IntegerChoices):
        LOGIN = 1, "Login"
        REGISTER = 2, "Register"
        VERIFY_PHONE = 3, "Verify Phone"
        VERIFY_EMAIL = 4, "Verify Email"

    otp_hash = models.CharField(max_length=64)
    purpose = models.IntegerField(choices=Purposes)
    phone = PhoneNumberField(null=True, blank=True)
    email = models.EmailField(null=True, blank=True)
    created_at = models.DateTimeField(auto_now_add=True)
    expires_at = models.DateTimeField()
    failed_attempts = models.PositiveSmallIntegerField(default=0)
    used = models.BooleanField(default=False)
    used_at = models.DateTimeField(null=True, blank=True)
What I want to enforce:
A user should not receive another OTP if there is already an unexpired (based on the expires_at field), unused (used=False) OTP for the same purpose.
The condition should apply to either:
- (phone, purpose) if phone is set, or
- (email, purpose) if email is set.
And this logic must hold true even under concurrent requests.
What I've tried so far:
- Application-level check with OTP.objects.filter(...) before creating a new one — this is not safe under race conditions.
- Adding an is_expired boolean field and a conditional UniqueConstraint like this:
class Meta:
    constraints = [
        models.UniqueConstraint(
            fields=["phone", "purpose"],
            condition=Q(is_expired=False),
            name="unique_active_phone_otp",
        ),
        models.UniqueConstraint(
            fields=["email", "purpose"],
            condition=Q(is_expired=False),
            name="unique_active_email_otp",
        ),
    ]
But this requires updating the is_expired field manually or periodically when expires_at < now(), which adds operational complexity and is easy to forget.
Is there a cleaner way to enforce this uniqueness at the database level, even with time-based conditions like expires_at > now()?
Below PostgreSQL 18 you can set up an exclusion constraint to make sure there can't be two OTPs of a given type that are valid during overlapping periods. You'll need to swap out expires_at for a valid_between column holding a tstzrange of timestamps during which that OTP is valid:
demo at db<>fiddle
--btree_gist lets `purpose` into the gist index behind the exclusion constraint
create extension btree_gist;

create table otp(purpose int, expires_at timestamptz);

alter table otp rename column expires_at to valid_between;

alter table otp
  alter column valid_between type tstzrange
    using tstzrange(valid_between, valid_between + '1 minute'::interval)
 ,add constraint one_valid_otp_per_purpose_at_a_time
    exclude using gist (purpose WITH =
                       ,valid_between WITH &&);
insert into otp values
(1,tstzrange(now(),now()+'1 minute'::interval))
, (2,tstzrange(now(),now()+'1 minute'::interval));
insert into otp values
(1,tstzrange(now()+'10 seconds',now()+'65 seconds'::interval));
ERROR:  conflicting key value violates exclusion constraint "one_valid_otp_per_purpose_at_a_time"
DETAIL:  Key (purpose, valid_between)=(1, ["2025-06-18 12:50:21.469054+00","2025-06-18 12:51:16.469054+00")) conflicts with existing key (purpose, valid_between)=(1, ["2025-06-18 12:50:11.468011+00","2025-06-18 12:51:11.468011+00")).
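If you'd rather declare that constraint from Django than in raw SQL, django.contrib.postgres can express it directly. Here's a minimal sketch mirroring the two-column demo above; in your real model you'd presumably keep two of these, one conditioned on phone and one on email, just like your UniqueConstraint pair:

from django.contrib.postgres.constraints import ExclusionConstraint
from django.contrib.postgres.fields import DateTimeRangeField, RangeOperators
from django.db import models


class OTP(models.Model):
    purpose = models.IntegerField()
    # replaces expires_at: the window during which this OTP is valid
    valid_between = DateTimeRangeField()

    class Meta:
        constraints = [
            ExclusionConstraint(
                name="one_valid_otp_per_purpose_at_a_time",
                expressions=[
                    ("purpose", RangeOperators.EQUAL),
                    ("valid_between", RangeOperators.OVERLAPS),
                ],
            ),
        ]

The migration that adds it also needs BtreeGistExtension() from django.contrib.postgres.operations, which is the Django counterpart of the create extension btree_gist above.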
Postgres 18 brings WITHOUT OVERLAPS syntax to unique constraints, which does the same:
alter table otp
  add constraint one_valid_otp_per_purpose_at_a_time
  unique (purpose, valid_between without overlaps);
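As far as I know Django's UniqueConstraint can't express WITHOUT OVERLAPS yet, so on 18+ you could apply it from a migration with RunSQL. A sketch, where the otp_otp table name and the app/migration labels are assumptions:

from django.db import migrations


class Migration(migrations.Migration):
    dependencies = [("otp", "0002_valid_between")]  # assumed previous migration

    operations = [
        migrations.RunSQL(
            sql="""
                alter table otp_otp
                  add constraint one_valid_otp_per_purpose_at_a_time
                  unique (purpose, valid_between without overlaps);
            """,
            reverse_sql="""
                alter table otp_otp
                  drop constraint one_valid_otp_per_purpose_at_a_time;
            """,
        ),
    ]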
To actually check that flag, use a view (or a virtual generated column in version 18+) for your is_expired field. That way the flag is calculated at query time by comparing now() to valid_between.
create view v_otp as
select *, not (now() <@ valid_between) as is_expired
from otp;

select * from v_otp;
purpose | valid_between | is_expired |
---|---|---|
1 | ["2025-06-18 12:57:37.530795+00","2025-06-18 12:57:42.530795+00") | False |
2 | ["2025-06-18 12:57:37.530795+00","2025-06-18 12:57:42.530795+00") | False |
select pg_sleep(6);
select now();
select*from v_otp;
now |
---|
2025-06-18 12:57:43.539171+00 |
purpose | valid_between | is_expired |
---|---|---|
1 | ["2025-06-18 12:57:37.530795+00","2025-06-18 12:57:42.530795+00") | True |
2 | ["2025-06-18 12:57:37.530795+00","2025-06-18 12:57:42.530795+00") | True |
On its own, the view/virtual generated column is not supposed to solve the race condition. That's done via locking - implicit, explicit, advisory. Your client is meant to query the view and pick up a still-valid OTP if one is there (I guess that's your intent). If not, the unique/exclusion constraint makes sure that when multiple requests attempt to generate a new one, only one succeeds - the rest can catch the exception raised when they are rejected.
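In Django those rejected inserts surface as IntegrityError, so the catch-the-exception path might look like this sketch (issue_otp and its fallback query are illustrative, not a fixed API):

from django.db import IntegrityError, transaction
from django.db.models.functions import Now


def issue_otp(purpose, valid_between, **fields):
    try:
        with transaction.atomic():
            return OTP.objects.create(
                purpose=purpose, valid_between=valid_between, **fields
            )
    except IntegrityError:
        # another request won the race: the exclusion/unique constraint rejected us,
        # so hand back the OTP that is still valid right now instead
        return OTP.objects.filter(
            purpose=purpose, used=False, valid_between__contains=Now()
        ).first()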
You can pg_advisory_lock() a given OTP (purpose + owner) even before the check, or explicitly select .. for update, or just attempt to update it (the used and used_at fields), since update will try to acquire a row lock. Any of these results in the db arranging your parallel clients in a queue.