author     Robin Murphy <robin.murphy@arm.com>   2021-08-11 13:21:38 +0100
committer  Joerg Roedel <jroedel@suse.de>        2021-08-18 13:27:49 +0200
commit     452e69b58c2889e5546edb92d9e66285410f7463
tree       ba5049ab1aab6991887ab1b1b8baf141f8509e14 /drivers/iommu/iova.c
parent     e96763ec42ceb7fc4f1e80b8647bc3ef53b5d286
iommu: Allow enabling non-strict mode dynamically
Allocating and enabling a flush queue is in fact something we can
reasonably do while a DMA domain is active, without having to rebuild it
from scratch. Thus we can allow a strict -> non-strict transition from
sysfs without requiring the device's driver to be unbound, which is of
particular interest to users who want to make selective relaxations for
critical devices like the one serving their root filesystem.
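As a purely illustrative sketch of what that relaxation amounts to in code, the
gate on the sysfs path could look roughly like the helper below. The helper
name try_relax_to_non_strict is invented here, and the real logic lives in
iommu.c/dma-iommu.c hunks that fall outside the iova.c diffstat shown further
down, so treat this as the shape of the surrounding series rather than the
patch itself.

#include <linux/iommu.h>	/* struct iommu_domain, IOMMU_DOMAIN_DMA{,_FQ} */

/*
 * Hypothetical helper, for illustration only: accept a request to change a
 * live default domain's type, but only in the strict -> non-strict direction.
 * iommu_dma_init_fq() is the flush-queue setup named in the hunk below; its
 * declaration comes from the DMA-IOMMU header of this era.
 */
static int try_relax_to_non_strict(struct iommu_domain *domain, int req_type)
{
	if (req_type != IOMMU_DOMAIN_DMA_FQ ||
	    domain->type != IOMMU_DOMAIN_DMA)
		return -EINVAL;

	/* Allocate and enable the flush queue on the already-active domain. */
	if (iommu_dma_init_fq(domain))
		return -ENODEV;

	domain->type = IOMMU_DOMAIN_DMA_FQ;
	return 0;
}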
Disabling and draining a queue also seems technically possible to
achieve without rebuilding the whole domain, but would certainly be more
involved. Furthermore, there is no similarly clear use-case for tightening
up security *after* the device may already have done whatever it is that
you don't trust it not to do, so we only consider the relaxation case.
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/d652966348c78457c38bf18daf369272a4ebc2c9.1628682049.git.robin.murphy@arm.com
Signed-off-by: Joerg Roedel <jroedel@suse.de>
Diffstat (limited to 'drivers/iommu/iova.c')
-rw-r--r--   drivers/iommu/iova.c   11
1 file changed, 6 insertions(+), 5 deletions(-)
diff --git a/drivers/iommu/iova.c b/drivers/iommu/iova.c
index 2ad73fb2e94e..0af42fb93a49 100644
--- a/drivers/iommu/iova.c
+++ b/drivers/iommu/iova.c
@@ -121,8 +121,6 @@ int init_iova_flush_queue(struct iova_domain *iovad,
 		spin_lock_init(&fq->lock);
 	}
 
-	smp_wmb();
-
 	iovad->fq = queue;
 
 	timer_setup(&iovad->fq_timer, fq_flush_timeout, 0);
@@ -633,17 +631,20 @@ void queue_iova(struct iova_domain *iovad,
 		unsigned long pfn, unsigned long pages,
 		unsigned long data)
 {
-	struct iova_fq *fq = raw_cpu_ptr(iovad->fq);
+	struct iova_fq *fq;
 	unsigned long flags;
 	unsigned idx;
 
 	/*
 	 * Order against the IOMMU driver's pagetable update from unmapping
 	 * @pte, to guarantee that iova_domain_flush() observes that if called
-	 * from a different CPU before we release the lock below.
+	 * from a different CPU before we release the lock below. Full barrier
+	 * so it also pairs with iommu_dma_init_fq() to avoid seeing partially
+	 * written fq state here.
 	 */
-	smp_wmb();
+	smp_mb();
 
+	fq = raw_cpu_ptr(iovad->fq);
 	spin_lock_irqsave(&fq->lock, flags);
 
 	/*
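To make the ordering in the rewritten comment concrete, here is a small
standalone C11 sketch (not kernel code; all names are invented for the
illustration) of the publish/consume pattern it describes: the writer fully
initializes the queue before publishing the pointer, and the reader issues a
full fence, standing in for smp_mb(), before loading and dereferencing it, so
partially written queue state can never be observed.

#include <stdatomic.h>
#include <stddef.h>

struct fake_fq {
	unsigned int head;
	unsigned int tail;
};

static struct fake_fq the_fq;
static _Atomic(struct fake_fq *) published_fq;

/* Writer: initialize everything first, then publish with release semantics. */
void publish_fq(void)
{
	the_fq.head = 0;
	the_fq.tail = 0;
	atomic_store_explicit(&published_fq, &the_fq, memory_order_release);
}

/* Reader: full fence first (playing the role of smp_mb()), then load and use. */
int fq_is_idle(void)
{
	struct fake_fq *fq;

	atomic_thread_fence(memory_order_seq_cst);
	fq = atomic_load_explicit(&published_fq, memory_order_acquire);
	if (!fq)
		return 1;	/* not published yet; nothing queued */

	return fq->head == fq->tail;
}

In the patch itself the reader-side barrier comes almost for free: queue_iova()
already needed a barrier to order against the caller's pagetable update, and
upgrading it from smp_wmb() to smp_mb(), with the iovad->fq load moved after
it, lets that single barrier also cover the consume side.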