path: root/kernel/dma
2023-04-06  swiotlb: fix a braino in the alignment check fix  (Petr Tesarik)

The alignment mask in swiotlb_do_find_slots() masks off the high bits which are not relevant for the alignment, so multiple requirements are combined with a bitwise OR rather than AND. In plain English, the stricter the alignment, the more bits must be set in iotlb_align_mask.

Confusion may arise from the fact that the same variable is also used to mask off the offset within a swiotlb slot, which is achieved with a bitwise AND.

Fixes: 0eee5ae10256 ("swiotlb: fix slot alignment checks")
Reported-by: Dexuan Cui <decui@microsoft.com>
Link: https://lore.kernel.org/all/CAA42JLa1y9jJ7BgQvXeUYQh-K2mDNHd2BYZ4iZUz33r5zY7oAQ@mail.gmail.com/
Reported-by: Kelsey Steele <kelseysteele@linux.microsoft.com>
Link: https://lore.kernel.org/all/20230405003549.GA21326@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net/
Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
Tested-by: Dexuan Cui <decui@microsoft.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
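A minimal sketch of the rule described above (simplified pseudo-C, not the actual kernel hunk; variable names such as dev_min_align_mask, alloc_align_mask, slot_phys and orig_addr are illustrative):

        /* combining two alignment requirements: each mask names the low
         * bits that must be zero, so the stricter combination is an OR */
        unsigned long iotlb_align_mask = dev_min_align_mask | alloc_align_mask;

        /* applying a mask to compare offsets uses AND, as usual */
        if ((slot_phys & iotlb_align_mask) != (orig_addr & iotlb_align_mask))
                continue;       /* slot does not satisfy the alignment */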
2023-03-22  swiotlb: fix slot alignment checks  (Petr Tesarik)

Explicit alignment and page alignment are used only to calculate the stride, not when checking actual slot physical address. Originally, only page alignment was implemented, and that worked, because the whole SWIOTLB is allocated on a page boundary, so aligning the start index was sufficient to ensure a page-aligned slot.

When commit 1f221a0d0dbf ("swiotlb: respect min_align_mask") added support for min_align_mask, the index could be incremented in the search loop, potentially finding an unaligned slot if minimum device alignment is between IO_TLB_SIZE and PAGE_SIZE. The bug could go unnoticed, because the slot size is 2 KiB, and the most common page size is 4 KiB, so there is no alignment value in between.

IIUC the intention has been to find a slot that conforms to all alignment constraints: device minimum alignment, an explicit alignment (given as function parameter) and optionally page alignment (if allocation size is >= PAGE_SIZE). The most restrictive mask can be trivially computed with logical AND. The rest can stay.

Fixes: 1f221a0d0dbf ("swiotlb: respect min_align_mask")
Fixes: e81e99bacc9f ("swiotlb: Support aligned swiotlb buffers")
Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2023-03-22  swiotlb: use wrap_area_index() instead of open-coding it  (Petr Tesarik)

No functional change, just use an existing helper.

Signed-off-by: Petr Tesarik <petr.tesarik.ext@huawei.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2023-03-15  swiotlb: fix the deadlock in swiotlb_do_find_slots  (GuoRui.Yu)

In general, if swiotlb is sufficient, the logic of index = wrap_area_index(mem, index + 1) is fine: it will quickly take a slot and release the area->lock. But if swiotlb is insufficient and the device has min_align_mask requirements, such as NVMe, we may not be able to satisfy index == wrap and exit the loop properly. In this case, other kernel threads will not be able to acquire the area->lock and release their slots, resulting in a deadlock.

The current implementation of wrap_area_index does not involve a modulo operation, so adjusting the wrap to ensure the loop ends is not trivial. Introduce a new variable to record the number of loops and exit the loop after completing the traversal.

Backtraces:

Other CPUs are waiting for this core to exit the swiotlb_do_find_slots loop.
[10199.924391] RIP: 0010:swiotlb_do_find_slots+0x1fe/0x3e0
[10199.924403] Call Trace:
[10199.924404] <TASK>
[10199.924405] swiotlb_tbl_map_single+0xec/0x1f0
[10199.924407] swiotlb_map+0x5c/0x260
[10199.924409] ? nvme_pci_setup_prps+0x1ed/0x340
[10199.924411] dma_direct_map_page+0x12e/0x1c0
[10199.924413] nvme_map_data+0x304/0x370
[10199.924415] nvme_prep_rq.part.0+0x31/0x120
[10199.924417] nvme_queue_rq+0x77/0x1f0
...
[ 9639.596311] NMI backtrace for cpu 48
[ 9639.596336] Call Trace:
[ 9639.596337]
[ 9639.596338] _raw_spin_lock_irqsave+0x37/0x40
[ 9639.596341] swiotlb_do_find_slots+0xef/0x3e0
[ 9639.596344] swiotlb_tbl_map_single+0xec/0x1f0
[ 9639.596347] swiotlb_map+0x5c/0x260
[ 9639.596349] dma_direct_map_sg+0x7a/0x280
[ 9639.596352] __dma_map_sg_attrs+0x30/0x70
[ 9639.596355] dma_map_sgtable+0x1d/0x30
[ 9639.596356] nvme_map_data+0xce/0x370
...
[ 9639.595665] NMI backtrace for cpu 50
[ 9639.595682] Call Trace:
[ 9639.595682]
[ 9639.595683] _raw_spin_lock_irqsave+0x37/0x40
[ 9639.595686] swiotlb_release_slots.isra.0+0x86/0x180
[ 9639.595688] dma_direct_unmap_sg+0xcf/0x1a0
[ 9639.595690] nvme_unmap_data.part.0+0x43/0xc0

Fixes: 1f221a0d0dbf ("swiotlb: respect min_align_mask")
Signed-off-by: GuoRui.Yu <GuoRui.Yu@linux.alibaba.com>
Signed-off-by: Xiaokang Hu <xiaokang.hxk@alibaba-inc.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
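A hedged sketch of the bounded-search idea described above (slot_is_usable() is a hypothetical helper and the field names are illustrative, not the exact kernel code):

        unsigned int slots_checked = 0;
        unsigned int index = wrap;

        /* visit each slot of the area at most once, then give up */
        while (slots_checked < mem->area_nslabs) {
                if (slot_is_usable(mem, index))
                        goto found;
                index = wrap_area_index(mem, index + 1);
                slots_checked++;
        }
        /* not found: drop area->lock and report failure instead of spinning */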
2023-02-22  swiotlb: mark swiotlb_memblock_alloc() as __init  (Randy Dunlap)

swiotlb_memblock_alloc() calls memblock_alloc(), which calls (__init) memblock_alloc_try_nid(). However, swiotlb_memblock_alloc() can be marked as __init since it is only called by swiotlb_init_remap(), which is already marked as __init. This prevents a modpost build warning/error:

WARNING: modpost: vmlinux.o: section mismatch in reference: swiotlb_memblock_alloc (section: .text) -> memblock_alloc_try_nid (section: .init.text)
WARNING: modpost: vmlinux.o: section mismatch in reference: swiotlb_memblock_alloc (section: .text) -> memblock_alloc_try_nid (section: .init.text)

This fixes the build warning/error seen on ARM64, PPC64, S390, i386, and x86_64.

Fixes: 8d58aa484920 ("swiotlb: reduce the swiotlb buffer size on allocation failure")
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Alexey Kardashevskiy <aik@amd.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: iommu@lists.linux.dev
Cc: Mike Rapoport <rppt@kernel.org>
Cc: linux-mm@kvack.org
Signed-off-by: Christoph Hellwig <hch@lst.de>
2023-02-16  swiotlb: remove swiotlb_max_segment  (Christoph Hellwig)

swiotlb_max_segment has always been a bogus API, so remove it now that the remaining callers are gone.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
2022-12-21  dma-mapping: reject GFP_COMP for noncoherent allocations  (Christoph Hellwig)

While not quite as bogus as for the dma-coherent allocations that were fixed earlier, GFP_COMP for these allocations has no benefits for the dma-direct case, and can't be supported at all by the dma-iommu backend, which splits up allocations into smaller orders. Due to an oversight in ffcb75458460 that flag stopped being cleared for all dma allocations, but only got rejected for coherent ones, so fix up these callers to not allow __GFP_COMP as well after the sound code has been fixed to not ask for it.

Fixes: ffcb75458460 ("dma-mapping: reject __GFP_COMP in dma_alloc_attrs")
Reported-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Reported-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Takashi Iwai <tiwai@suse.de>
Tested-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
Tested-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
2022-11-21  dma-mapping: reject __GFP_COMP in dma_alloc_attrs  (Christoph Hellwig)

DMA allocations can never be turned back into a page pointer, so requesting compound pages doesn't make sense and it can't even be supported at all by various backends. Reject __GFP_COMP with a warning in dma_alloc_attrs, and stop clearing the flag in the arm dma ops and dma-iommu.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Marek Szyprowski <m.szyprowski@samsung.com>
2022-11-01  swiotlb: reduce the swiotlb buffer size on allocation failure  (Alexey Kardashevskiy)

At the moment the AMD encrypted platform reserves 6% of RAM for SWIOTLB or 1GB, whichever is less. However it is possible that there is no block big enough in the low memory, which makes the SWIOTLB allocation fail and the kernel continue without DMA. In such a case a VM hangs on DMA.

This moves alloc+remap to a helper and calls it from a loop where the size is halved on each iteration.

This also updates default_nslabs on successful allocation; not doing so looks like an oversight, as it would have broken callers of swiotlb_size_or_default().

Signed-off-by: Alexey Kardashevskiy <aik@amd.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
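A hedged sketch of the retry loop described above (helper and constant names are close to, but not guaranteed to match, the actual code):

        /* halve the requested slab count until the allocation succeeds
         * or the buffer would become too small to be useful */
        while ((tlb = swiotlb_memblock_alloc(nslabs, flags, remap)) == NULL) {
                if (nslabs <= IO_TLB_MIN_SLABS)
                        return;
                nslabs = ALIGN(nslabs >> 1, IO_TLB_SEGSIZE);
        }
        default_nslabs = nslabs;    /* keep swiotlb_size_or_default() honest */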
2022-10-10  Merge tag 'mm-stable-2022-10-08' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm  (Linus Torvalds)

Pull MM updates from Andrew Morton:

 - Yu Zhao's Multi-Gen LRU patches are here. They've been under test in linux-next for a couple of months without, to my knowledge, any negative reports (or any positive ones, come to that).

 - Also the Maple Tree from Liam Howlett. An overlapping range-based tree for vmas. It is apparently slightly more efficient in its own right, but is mainly targeted at enabling work to reduce mmap_lock contention.

   Liam has identified a number of other tree users in the kernel which could be beneficially converted to maple trees.

   Yu Zhao has identified a hard-to-hit but "easy to fix" lockdep splat at [1]. This has yet to be addressed due to Liam's unfortunately timed vacation. He is now back and we'll get this fixed up.

 - Dmitry Vyukov introduces KMSAN: the Kernel Memory Sanitizer. It uses clang-generated instrumentation to detect used-uninitialized bugs down to the single bit level.

   KMSAN keeps finding bugs. New ones, as well as the legacy ones.

 - Yang Shi adds a userspace mechanism (madvise) to induce a collapse of memory into THPs.
 - Zach O'Keefe has expanded Yang Shi's madvise(MADV_COLLAPSE) to support file/shmem-backed pages.
 - userfaultfd updates from Axel Rasmussen
 - zsmalloc cleanups from Alexey Romanov
 - cleanups from Miaohe Lin: vmscan, hugetlb_cgroup, hugetlb and memory-failure
 - Huang Ying adds enhancements to NUMA balancing memory tiering mode's page promotion, with a new way of detecting hot pages.
 - memcg updates from Shakeel Butt: charging optimizations and reduced memory consumption.
 - memcg cleanups from Kairui Song.
 - memcg fixes and cleanups from Johannes Weiner.
 - Vishal Moola provides more folio conversions
 - Zhang Yi removed ll_rw_block() :(
 - migration enhancements from Peter Xu
 - migration error-path bugfixes from Huang Ying
 - Aneesh Kumar added the ability for a device driver to alter the memory tiering promotion paths. For optimizations by PMEM drivers, DRM drivers, etc.
 - vma merging improvements from Jakub Matěna.
 - NUMA hinting cleanups from David Hildenbrand.
 - xu xin added additional userspace visibility into KSM merging activity.
 - THP & KSM code consolidation from Qi Zheng.
 - more folio work from Matthew Wilcox.
 - KASAN updates from Andrey Konovalov.
 - DAMON cleanups from Kaixu Xia.
 - DAMON work from SeongJae Park: fixes, cleanups.
 - hugetlb sysfs cleanups from Muchun Song.
 - Mike Kravetz fixes locking issues in hugetlbfs and in hugetlb core.

Link: https://lkml.kernel.org/r/CAOUHufZabH85CeUN-MEMgL8gJGzJEWUrkiM58JkTbBhh-jew0Q@mail.gmail.com [1]

* tag 'mm-stable-2022-10-08' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (555 commits)
  hugetlb: allocate vma lock for all sharable vmas
  hugetlb: take hugetlb vma_lock when clearing vma_lock->vma pointer
  hugetlb: fix vma lock handling during split vma and range unmapping
  mglru: mm/vmscan.c: fix imprecise comments
  mm/mglru: don't sync disk for each aging cycle
  mm: memcontrol: drop dead CONFIG_MEMCG_SWAP config symbol
  mm: memcontrol: use do_memsw_account() in a few more places
  mm: memcontrol: deprecate swapaccounting=0 mode
  mm: memcontrol: don't allocate cgroup swap arrays when memcg is disabled
  mm/secretmem: remove reduntant return value
  mm/hugetlb: add available_huge_pages() func
  mm: remove unused inline functions from include/linux/mm_inline.h
  selftests/vm: add selftest for MADV_COLLAPSE of uffd-minor memory
  selftests/vm: add file/shmem MADV_COLLAPSE selftest for cleared pmd
  selftests/vm: add thp collapse shmem testing
  selftests/vm: add thp collapse file and tmpfs testing
  selftests/vm: modularize thp collapse memory operations
  selftests/vm: dedup THP helpers
  mm/khugepaged: add tracepoint to hpage_collapse_scan_file()
  mm/madvise: add file and shmem support to MADV_COLLAPSE
  ...
2022-10-03  dma: kmsan: unpoison DMA mappings  (Alexander Potapenko)

KMSAN doesn't know about DMA memory writes performed by devices. We unpoison such memory when it's mapped to avoid false positive reports.

Link: https://lkml.kernel.org/r/20220915150417.722975-22-glider@google.com
Signed-off-by: Alexander Potapenko <glider@google.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Andrey Konovalov <andreyknvl@google.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Eric Biggers <ebiggers@google.com>
Cc: Eric Biggers <ebiggers@kernel.org>
Cc: Eric Dumazet <edumazet@google.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Ilya Leoshkevich <iii@linux.ibm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Marco Elver <elver@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Michael S. Tsirkin <mst@redhat.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Petr Mladek <pmladek@suse.com>
Cc: Stephen Rothwell <sfr@canb.auug.org.au>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vasily Gorbik <gor@linux.ibm.com>
Cc: Vegard Nossum <vegard.nossum@oracle.com>
Cc: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
2022-09-20  swiotlb: don't panic!  (Robin Murphy)

The panics in swiotlb are relics of a bygone era, some of them inadvertently inherited from a memblock refactor, and all of them unnecessary since they are in places that may also fail gracefully anyway.

Convert the panics in swiotlb_init_remap() into non-fatal warnings more consistent with the other bail-out paths there and in swiotlb_init_late() (but don't bother trying to roll anything back, since if anything does actually fail that early, the aim is merely to keep going as far as possible to get more diagnostic information out of the inevitably-dying kernel). It's not for SWIOTLB to decide that the system is terminally compromised just because there *might* turn out to be one or more 32-bit devices that might want to make streaming DMA mappings, especially since we already handle the no-buffer case later if it turns out someone did want it.

Similarly though, downgrade that panic in swiotlb_tbl_map_single(), since even if we do get to that point it's an overly extreme reaction. It makes little difference to the DMA API caller whether a mapping fails because the buffer is full or because there is no buffer, and once again it's not for SWIOTLB to presume that any particular DMA mapping is so fundamental to the operation of the system that it must be terminal if it could never succeed. Even if the caller handles failure by futilely retrying forever, a single stuck thread is considerably less impactful to the user than a needless panic.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-09-20  swiotlb: replace kmap_atomic() with memcpy_{from,to}_page()  (Fabio M. De Francesco)

The use of kmap_atomic() is being deprecated in favor of kmap_local_page(), which can also be used in atomic context (including interrupts).

Replace kmap_atomic() with kmap_local_page(). Instead of open coding mapping, memcpy(), and un-mapping, use the memcpy_{from,to}_page() helper.

Suggested-by: Ira Weiny <ira.weiny@intel.com>
Signed-off-by: Fabio M. De Francesco <fmdefrancesco@gmail.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
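The conversion pattern, roughly (a sketch of the general shape, not the exact kernel hunk):

        /* before: open-coded map, copy, unmap */
        char *vaddr = kmap_atomic(page);
        memcpy(dst, vaddr + offset, len);
        kunmap_atomic(vaddr);

        /* after: one helper, which uses kmap_local_page() internally */
        memcpy_from_page(dst, page, offset, len);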
2022-09-07  dma-mapping: mark dma_supported static  (Christoph Hellwig)

Now that the remaining users in drivers are gone, this function can be marked static.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-09-07  swiotlb: fix a typo  (Chao Gao)

"overwirte" isn't a word. It should be "overwrite".

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-09-07  swiotlb: avoid potential left shift overflow  (Chao Gao)

The second operand passed to slot_addr() is declared as int or unsigned int in all call sites. The left-shift to get the offset of a slot can overflow if swiotlb size is larger than 4G.

Convert the macro to an inline function and declare the second argument as phys_addr_t to avoid the potential overflow.

Fixes: 26a7e094783d ("swiotlb: refactor swiotlb_tbl_map_single")
Signed-off-by: Chao Gao <chao.gao@intel.com>
Reviewed-by: Dongli Zhang <dongli.zhang@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
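An illustrative before/after of the change described (simplified sketch):

        /* before: a macro; the shift happens in the (possibly 32-bit)
         * type of 'idx' and can overflow for a swiotlb larger than 4G */
        #define slot_addr(start, idx)   ((start) + ((idx) << IO_TLB_SHIFT))

        /* after: an inline function forcing phys_addr_t arithmetic */
        static inline phys_addr_t slot_addr(phys_addr_t start, phys_addr_t idx)
        {
                return start + (idx << IO_TLB_SHIFT);
        }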
2022-09-07  dma-debug: improve search for partial syncs  (Robin Murphy)

When bucket_find_contains() tries to find the original entry for a partial sync, it manages to constrain its search in a way that is both too restrictive and not restrictive enough. A driver which only uses single mappings rather than scatterlists might not set max_seg_size, but could still technically perform a partial sync at an offset of more than 64KB into a sufficiently large mapping, so we could stop searching too early before reaching a legitimate entry. Conversely, if no valid entry is present and max_range is large enough, we can pointlessly search buckets that we've already searched, or that represent an impossible wrapping around the bottom of the address space. At worst, the (legitimate) case of max_seg_size == UINT_MAX can make the loop infinite.

Replace the fragile and frankly hard-to-follow "range" logic with a simple counted loop for the number of possible hash buckets below the given address.

Reported-by: Yunfei Wang <yf.wang@mediatek.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-09-07  Revert "swiotlb: panic if nslabs is too small"  (Yu Zhao)

This reverts commit 0bf28fc40d89b1a3e00d1b79473bad4e9ca20ad1.

Reasons:
 1. new panic()s shouldn't be added [1].
 2. It does no "cleanup" but breaks MIPS [2].

v2: properly solved the conflict [3] with commit 20347fca71a38 ("swiotlb: split up the global swiotlb lock")

Reported-by: kernel test robot <lkp@intel.com>
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
[1] https://lore.kernel.org/r/CAHk-=wit-DmhMfQErY29JSPjFgebx_Ld+pnerc4J2Ag990WwAA@mail.gmail.com/
[2] https://lore.kernel.org/r/20220820012031.1285979-1-yuzhao@google.com/
[3] https://lore.kernel.org/r/202208310701.LKr1WDCh-lkp@intel.com/
Fixes: 0bf28fc40d89b ("swiotlb: panic if nslabs is too small")
Signed-off-by: Yu Zhao <yuzhao@google.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-08-08  Merge tag 'rproc-v5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/remoteproc/linux  (Linus Torvalds)

Pull remoteproc updates from Bjorn Andersson:

 "This introduces support for the remoteproc on Mediatek MT8188, and enables caches for MT8186 SCP. It adds support for PRU cores found on the TI K3 AM62x SoCs.

  It moves the recovery work after a firmware crash to an unbound workqueue, to allow recovery to happen in parallel. A new DMA API is introduced to release dma_mem for a device.

  It adds support for a panic handler for the Qualcomm modem remoteproc, with the goal of having caches flushed in memory dumps for post-mortem debugging, and it introduces a mechanism to wait for the modem firmware on SM8450 to decrypt part of its memory for post-mortem debugging.

  Qualcomm sysmon is restricted to only inform remote processors about peers that are actually running, to avoid a race where Linux tries to notify a recovering remote processor about its peers' new state. A mechanism for waiting for the sysmon connection to be established is also introduced, to avoid out-of-sync updates for rapidly restarting remote processors.

  A number of Devicetree binding cleanups and conversions to YAML are introduced, to facilitate Devicetree validation.

  Lastly it introduces a number of smaller fixes and cleanups in the core and a few different drivers"

* tag 'rproc-v5.20' of git://git.kernel.org/pub/scm/linux/kernel/git/remoteproc/linux: (42 commits)
  remoteproc: qcom_q6v5_pas: Do not fail if regulators are not found
  drivers/remoteproc: fix repeated words in comments
  remoteproc: Directly use ida_alloc()/free()
  remoteproc: Use unbounded workqueue for recovery work
  remoteproc: using pm_runtime_resume_and_get instead of pm_runtime_get_sync
  remoteproc: qcom_q6v5_pas: Deal silently with optional px and cx regulators
  remoteproc: sysmon: Send sysmon state only for running rprocs
  remoteproc: sysmon: Wait for SSCTL service to come up
  remoteproc: qcom: q6v5: Set q6 state to offline on receiving wdog irq
  remoteproc: qcom: pas: Check if coredump is enabled
  remoteproc: qcom: pas: Mark devices as wakeup capable
  remoteproc: qcom: pas: Mark va as io memory
  remoteproc: qcom: pas: Add decrypt shutdown support for modem
  remoteproc: qcom: q6v5-mss: add powerdomains to MSM8996 config
  remoteproc: qcom_q6v5: Introduce panic handler for MSS
  remoteproc: qcom_q6v5_mss: Update MBA log info
  remoteproc: qcom: correct kerneldoc
  remoteproc: qcom_q6v5_mss: map/unmap metadata region before/after use
  remoteproc: qcom: using pm_runtime_resume_and_get to simplify the code
  remoteproc: mediatek: Support MT8188 SCP
  ...
2022-08-06  Merge tag 'dma-mapping-5.20-2022-08-06' of git://git.infradead.org/users/hch/dma-mapping  (Linus Torvalds)

Pull dma-mapping updates from Christoph Hellwig:

 - convert arm32 to the common dma-direct code (Arnd Bergmann, Robin Murphy, Christoph Hellwig)
 - restructure the PCIe peer to peer mapping support (Logan Gunthorpe)
 - allow the IOMMU code to communicate an optional DMA mapping length and use that in scsi and libata (John Garry)
 - split the global swiotlb lock (Tianyu Lan)
 - various fixes and cleanup (Chao Gao, Dan Carpenter, Dongli Zhang, Lukas Bulwahn, Robin Murphy)

* tag 'dma-mapping-5.20-2022-08-06' of git://git.infradead.org/users/hch/dma-mapping: (45 commits)
  swiotlb: fix passing local variable to debugfs_create_ulong()
  dma-mapping: reformat comment to suppress htmldoc warning
  PCI/P2PDMA: Remove pci_p2pdma_[un]map_sg()
  RDMA/rw: drop pci_p2pdma_[un]map_sg()
  RDMA/core: introduce ib_dma_pci_p2p_dma_supported()
  nvme-pci: convert to using dma_map_sgtable()
  nvme-pci: check DMA ops when indicating support for PCI P2PDMA
  iommu/dma: support PCI P2PDMA pages in dma-iommu map_sg
  iommu: Explicitly skip bus address marked segments in __iommu_map_sg()
  dma-mapping: add flags to dma_map_ops to indicate PCI P2PDMA support
  dma-direct: support PCI P2PDMA pages in dma-direct map_sg
  dma-mapping: allow EREMOTEIO return code for P2PDMA transfers
  PCI/P2PDMA: Introduce helpers for dma_map_sg implementations
  PCI/P2PDMA: Attempt to set map_type if it has not been set
  lib/scatterlist: add flag for indicating P2PDMA segments in an SGL
  swiotlb: clean up some coding style and minor issues
  dma-mapping: update comment after dmabounce removal
  scsi: sd: Add a comment about limiting max_sectors to shost optimal limit
  ata: libata-scsi: cap ata_device->max_sectors according to shost->max_sectors
  scsi: scsi_transport_sas: cap shost opt_sectors according to DMA optimal limit
  ...
2022-07-28  swiotlb: fix passing local variable to debugfs_create_ulong()  (Tianyu Lan)

The debugfs node is checked at run time, so a local variable must not be passed to debugfs_create_ulong(). Fix this by using debugfs_create_file() to create the io_tlb_used node and calculating the number of used IO TLB slots in the fops_io_tlb_used attribute.

Fixes: 20347fca71a3 ("swiotlb: split up the global swiotlb lock")
Signed-off-by: Tianyu Lan <tiala@microsoft.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
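A sketch of the resulting pattern (only fops_io_tlb_used is named by the commit message; mem_used() stands in for whatever helper sums the per-area counters, and the parent dentry is illustrative):

        static int io_tlb_used_get(void *data, u64 *val)
        {
                *val = mem_used(&io_tlb_default_mem);   /* computed at read time */
                return 0;
        }
        DEFINE_DEBUGFS_ATTRIBUTE(fops_io_tlb_used, io_tlb_used_get, NULL, "%llu\n");

        /* no pointer to a stack variable is retained by debugfs */
        debugfs_create_file("io_tlb_used", 0400, dir, NULL, &fops_io_tlb_used);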
2022-07-28  dma-mapping: reformat comment to suppress htmldoc warning  (Logan Gunthorpe)

The html doc build reports a cryptic warning for the commit named below:

kernel/dma/mapping.c:258: WARNING: Option list ends without a blank line; unexpected unindent.

It seems the parser is a bit fussy about the tabbing, and having a single space tab causes the warning. To suppress the warning, add another tab to the list and reindent everything.

Fixes: 7c2645a2a30a ("dma-mapping: allow EREMOTEIO return code for P2PDMA transfers")
Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-26  dma-mapping: add flags to dma_map_ops to indicate PCI P2PDMA support  (Logan Gunthorpe)

Add a flags member to the dma_map_ops structure with one flag to indicate support for PCI P2PDMA.

Also, add a helper to check if a device supports PCI P2PDMA.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
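The shape of the change, as a hedged sketch (the flag and helper names follow the description above but may not match the final code exactly):

        struct dma_map_ops {
                unsigned int flags;     /* e.g. DMA_F_PCI_P2PDMA_SUPPORTED */
                /* ... existing mapping callbacks ... */
        };

        bool dma_pci_p2pdma_supported(struct device *dev)
        {
                const struct dma_map_ops *ops = get_dma_ops(dev);

                /* dma-direct (ops == NULL) handles P2PDMA pages itself */
                return !ops || (ops->flags & DMA_F_PCI_P2PDMA_SUPPORTED);
        }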
2022-07-26  dma-direct: support PCI P2PDMA pages in dma-direct map_sg  (Logan Gunthorpe)

Add PCI P2PDMA support for dma_direct_map_sg() so that it can map PCI P2PDMA pages directly without a hack in the callers. This allows for heterogeneous SGLs that contain both P2PDMA and regular pages.

A P2PDMA page may have three possible outcomes when being mapped:
 1) If the data path between the two devices doesn't go through the root port, then it should be mapped with a PCI bus address
 2) If the data path goes through the host bridge, it should be mapped normally, as though it were a CPU physical address
 3) It is not possible for the two devices to communicate and thus the mapping operation should fail (and it will return -EREMOTEIO).

SGL segments that contain PCI bus addresses are marked with sg_dma_mark_pci_p2pdma() and are ignored when unmapped.

P2PDMA mappings are also failed if swiotlb needs to be used on the mapping.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-26  dma-mapping: allow EREMOTEIO return code for P2PDMA transfers  (Logan Gunthorpe)

Add EREMOTEIO error return to dma_map_sgtable() which will be used by .map_sg() implementations that detect P2PDMA pages that the underlying DMA device cannot access.

Signed-off-by: Logan Gunthorpe <logang@deltatee.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-22  swiotlb: clean up some coding style and minor issues  (Tianyu Lan)

 - Fix the used field of struct io_tlb_area not being initialized
 - Set the area number to 0 if the input area number parameter is 0
 - Use array_size() to calculate the io_tlb_area array size
 - Make the parameters of swiotlb_do_find_slots() more reasonable

Fixes: 26ffb91fa5e0 ("swiotlb: split up the global swiotlb lock")
Signed-off-by: Tianyu Lan <tiala@microsoft.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-19  dma-mapping: add dma_opt_mapping_size()  (John Garry)

Streaming DMA mappings involving an IOMMU may be much slower for larger total mapping sizes. This is because every IOMMU DMA mapping requires an IOVA to be allocated and freed. IOVA sizes above a certain limit are not cached, which can have a big impact on DMA mapping performance.

Provide an API for device drivers to know this "optimal" limit, such that they may try to produce mappings which don't exceed it.

Signed-off-by: John Garry <john.garry@huawei.com>
Reviewed-by: Damien Le Moal <damien.lemoal@opensource.wdc.com>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
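A hedged usage sketch (driver_pref_bytes is a made-up driver-side limit; only dma_opt_mapping_size() comes from the commit):

        /* cap the per-request mapping size to what the IOMMU can map
         * without falling off the IOVA-cache fast path */
        size_t max_bytes = min(driver_pref_bytes, dma_opt_mapping_size(dev));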
2022-07-18  swiotlb: move struct io_tlb_slot to swiotlb.c  (Christoph Hellwig)

No need to expose this structure definition in the header.

Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-18  swiotlb: ensure a segment doesn't cross the area boundary  (Chao Gao)

Free slots tracking assumes that slots in a segment can be allocated to fulfill a request. This implies that slots in a segment should belong to the same area. Although the possibility of a violation is low, it is better to explicitly enforce that segments won't span multiple areas by adjusting the number of slabs when configuring areas.

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-18  swiotlb: consolidate rounding up default_nslabs  (Chao Gao)

default_nslabs is rounded up in two places with exactly the same comments. Add a simple wrapper to reduce the duplicated code/comments. This is preparatory to adding more logic to the round-up.

No functional change intended.

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-18  swiotlb: remove unused fields in io_tlb_mem  (Chao Gao)

Commit 20347fca71a3 ("swiotlb: split up the global swiotlb lock") splits io_tlb_mem into multiple areas. Each area has its own lock and index. The global ones are not used so remove them.

Signed-off-by: Chao Gao <chao.gao@intel.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-18  swiotlb: fix use after free on error handling path  (Dan Carpenter)

Don't dereference "mem" after it has been freed. Flip the two kfree()s around to address this bug.

Fixes: 26ffb91fa5e0 ("swiotlb: split up the global swiotlb lock")
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-07-13  swiotlb: split up the global swiotlb lock  (Tianyu Lan)

Traditionally swiotlb was not performance critical because it was only used for slow devices. But in some setups, like TDX/SEV confidential guests, all IO has to go through swiotlb. Currently swiotlb only has a single lock. Under high IO load with multiple CPUs this can lead to significant lock contention on the swiotlb lock.

This patch splits the swiotlb bounce buffer pool into individual areas which have their own lock. Each CPU tries to allocate in its own area first. Only if that fails does it search other areas. On freeing, the allocation is freed into the area where the memory was originally allocated from.

The area number can be set via the swiotlb kernel parameter and defaults to the number of possible CPUs. If the possible CPU number is not a power of 2, the area number will be rounded up to the next power of 2.

This idea comes from Andi Kleen's patch (https://github.com/intel/tdx/commit/4529b5784c141782c72ec9bd9a92df2b68cb7d45).

Based-on-idea-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
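An illustrative sketch of the per-area split (field and variable names follow the description above, not necessarily the exact layout):

        struct io_tlb_area {
                unsigned long used;     /* slots allocated from this area */
                unsigned int index;     /* next slot to try in this area */
                spinlock_t lock;        /* protects only this area */
        };

        /* allocation starts in the current CPU's own area ... */
        int start = raw_smp_processor_id() & (mem->nareas - 1);
        /* ... and only falls back to the other areas if that one is full */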
2022-07-12  swiotlb: fail map correctly with failed io_tlb_default_mem  (Robin Murphy)

In the failure case of trying to use a buffer which we'd previously failed to allocate, the "!mem" condition is no longer sufficient since io_tlb_default_mem became static and assigned by default. Update the condition to work as intended per the rest of that conversion.

Fixes: 463e862ac63e ("swiotlb: Convert io_default_tlb_mem to static allocation")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-06-24  dma-mapping: Add dma_release_coherent_memory to DMA API  (Mark-PK Tsai)

Add dma_release_coherent_memory to the DMA API to allow DMA users to call it to release dev->dma_mem when the device is removed.

Signed-off-by: Mark-PK Tsai <mark-pk.tsai@mediatek.com>
Acked-by: Christoph Hellwig <hch@lst.de>
Link: https://lore.kernel.org/r/20220422062436.14384-2-mark-pk.tsai@mediatek.com
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
2022-06-23  dma-direct: use the correct size for dma_set_encrypted()  (Dexuan Cui)

The third parameter of dma_set_encrypted() is a size in bytes rather than the number of pages.

Fixes: 4d0564785bb0 ("dma-direct: factor out dma_set_{de,en}crypted helpers")
Signed-off-by: Dexuan Cui <decui@microsoft.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-06-22  swiotlb: panic if nslabs is too small  (Dongli Zhang)

Panic on purpose if nslabs is too small, in order to sync with the remap retry logic.

In addition, print the number of bytes for tlb alloc failure.

Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-06-22  swiotlb: remove a useless return in swiotlb_init  (Dongli Zhang)

Both swiotlb_init_remap() and swiotlb_init() have return type void.

Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-06-02  swiotlb: fix setting ->force_bounce  (Christoph Hellwig)

The swiotlb_init refactor messed up assigning ->force_bounce by doing it in different places based on what caused the setting of the flag. Fix this by passing the SWIOTLB_* flags to swiotlb_init_io_tlb_mem and just setting it there.

Fixes: c6af2aa9ffc9 ("swiotlb: make the swiotlb_init interface more useful")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Nathan Chancellor <nathan@kernel.org>
2022-06-02  dma-debug: make things less spammy under memory pressure  (Rob Clark)

Limit the error msg to avoid flooding the console. If you have a lot of threads hitting this at once, they could have already gotten past the dma_debug_disabled() check before they get to the point of allocation failure, resulting in quite a lot of this error message spamming the log. Use pr_err_once() to limit that.

Signed-off-by: Rob Clark <robdclark@chromium.org>
Signed-off-by: Christoph Hellwig <hch@lst.de>
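The pattern being applied, for reference (the message text here is illustrative, not a quote of the exact string):

        /* print at most once for the lifetime of the kernel instead of
         * once per thread that hits the allocation failure */
        pr_err_once("DMA-API: debugging out of memory - disabling\n");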
2022-05-23  dma-direct: don't over-decrypt memory  (Robin Murphy)

The original x86 sev_alloc() only called set_memory_decrypted() on memory returned by alloc_pages_node(), so the page order calculation fell out of that logic. However, the common dma-direct code has several potential allocators, not all of which are guaranteed to round up the underlying allocation to a power-of-two size, so carrying over that calculation for the encryption/decryption size was a mistake. Fix it by rounding to a *number* of pages, rather than an order.

Until recently there was an even worse interaction with DMA_DIRECT_REMAP where we could have ended up decrypting part of the next adjacent vmalloc area, only averted by no architecture actually supporting both configs at once. Don't ask how I found that one out...

Fixes: c10f07aa27da ("dma/direct: Handle force decryption for DMA coherent buffers in common code")
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: David Rientjes <rientjes@google.com>
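A sketch of the arithmetic behind the fix (shown with the generic set_memory_decrypted() helper; the dma-direct code goes through its dma_set_decrypted()/dma_set_encrypted() wrappers):

        /* decrypt exactly the pages backing 'size' bytes ... */
        unsigned int nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
        /* ... rather than the power-of-two 1 << get_order(size), which can
         * overshoot into an adjacent allocation */
        set_memory_decrypted((unsigned long)vaddr, nr_pages);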
2022-05-17  swiotlb: max mapping size takes min align mask into account  (Tianyu Lan)

swiotlb_find_slots() skips slots according to the IO TLB aligned mask calculated from the min align mask and the original physical address offset. This affects the max mapping size: the mapping size can't reach IO_TLB_SEGSIZE * IO_TLB_SIZE when the original offset is non-zero. This causes a system boot-up failure in Hyper-V Isolation VMs where swiotlb force is enabled.

The SCSI layer uses the return value of dma_max_mapping_size() to set the max segment size, which finally calls swiotlb_max_mapping_size(). The Hyper-V storage driver sets the min align mask to 4k - 1. The SCSI layer may pass a 256k-length request buffer with a 0~4k offset, and the Hyper-V storage driver can't get a swiotlb bounce buffer via the DMA API, because swiotlb_find_slots() can't find a 256k-length bounce buffer with that offset. Make swiotlb_max_mapping_size() take the min align mask into account.

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
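A hedged sketch of the adjustment described (close to, but not necessarily identical to, the actual change):

        size_t swiotlb_max_mapping_size(struct device *dev)
        {
                int min_align_mask = dma_get_min_align_mask(dev);
                int min_align = 0;

                /* a non-zero offset forced by min_align_mask eats into the
                 * largest contiguous run of IO_TLB_SEGSIZE slots */
                if (min_align_mask)
                        min_align = roundup(min_align_mask, IO_TLB_SIZE);

                return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE - min_align;
        }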
2022-05-13  swiotlb: use the right nslabs-derived sizes in swiotlb_init_late  (Christoph Hellwig)

nslabs can shrink when allocations or the remap don't succeed, so make sure to use it for all sizing. For that remove the bytes value that can get stale and replace it with local calculations and a boolean to indicate if the originally requested size could not be allocated.

Fixes: 6424e31b1c05 ("swiotlb: remove swiotlb_init_with_tbl and swiotlb_init_late_with_tbl")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
2022-05-13  swiotlb: use the right nslabs value in swiotlb_init_remap  (Christoph Hellwig)

default_nslabs should only be used to initialize nslabs, after that we need to use the local variable that can shrink when allocations or the remap don't succeed.

Fixes: 6424e31b1c05 ("swiotlb: remove swiotlb_init_with_tbl and swiotlb_init_late_with_tbl")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
2022-05-13  swiotlb: don't panic when the swiotlb buffer can't be allocated  (Christoph Hellwig)

For historical reasons the swiotlb code panicked when the metadata could not be allocated, but just printed a warning when the actual main swiotlb buffer could not be allocated. Restore this somewhat unexpected behavior, as changing it caused a boot failure on the Microchip RISC-V PolarFire SoC Icicle kit.

Fixes: 6424e31b1c05 ("swiotlb: remove swiotlb_init_with_tbl and swiotlb_init_late_with_tbl")
Reported-by: Conor Dooley <Conor.Dooley@microchip.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Conor Dooley <conor.dooley@microchip.com>
Tested-by: Conor Dooley <Conor.Dooley@microchip.com>
2022-05-11  dma-debug: change allocation mode from GFP_NOWAIT to GFP_ATOMIC  (Mikulas Patocka)

We observed the error "cacheline tracking ENOMEM, dma-debug disabled" during a light system load (copying some files). The reason for this error is that the dma_active_cacheline radix tree uses GFP_NOWAIT allocation - so it can't access the emergency memory reserves and it fails as soon as anybody reaches the watermark.

This patch changes GFP_NOWAIT to GFP_ATOMIC, so that it can access the emergency memory reserves.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-05-11  dma-direct: don't fail on highmem CMA pages in dma_direct_alloc_pages  (Christoph Hellwig)

When dma_direct_alloc_pages encounters a highmem page it currently just gives up. But what we really should do is to try allocating memory from the page allocator instead - without this, platforms with a global highmem CMA pool will fail all dma_alloc_pages allocations.

Fixes: efa70f2fdc84 ("dma-mapping: add a new dma_alloc_pages API")
Reported-by: Mark O'Neill <mao@tumblingdice.co.uk>
Signed-off-by: Christoph Hellwig <hch@lst.de>
2022-04-18  swiotlb: remove swiotlb_init_with_tbl and swiotlb_init_late_with_tbl  (Christoph Hellwig)

No users left.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2022-04-18  swiotlb: provide swiotlb_init variants that remap the buffer  (Christoph Hellwig)

To share more code between swiotlb and xen-swiotlb, offer a swiotlb_init_remap interface and add a remap callback to swiotlb_init_late that will allow Xen to remap the buffer without duplicating much of the logic.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
2022-04-18  swiotlb: pass a gfp_mask argument to swiotlb_init_late  (Christoph Hellwig)

Let the caller choose a zone to allocate from. This will be used later on by the xen-swiotlb initialization on arm.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>