|
Etnaviv assumes that the GPU page size is 4KiB; however, GPUVA ranges
collide when using softpin-capable GPUs on a non-4KiB CPU page size
configuration. The root cause is that a kernel-side BO takes up a
bigger address space than userspace expects, since the size of the
backing memory of GEM buffer objects is required to be aligned to the
CPU PAGE_SIZE. As a result, a userspace-allocated GPUVA range can fail
to be inserted into the specified hole exactly. To solve this problem,
first record the GPU-visible size of a BO, then map and unmap the SG
entry strictly with respect to the total GPUVA size.
Signed-off-by: Sui Jingfeng <sui.jingfeng@linux.dev>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
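As an illustration, a minimal sketch of the idea with invented names
(sketch_map_sg and the commented-out page-table helper are not the
driver's actual API): the BO records its GPU-visible length at
creation, and the map loop stops once that many bytes are mapped,
even though the backing sg_table covers whole CPU pages.

    #include <linux/minmax.h>
    #include <linux/scatterlist.h>

    /* Map at most va_len bytes of the sg_table, so a BO backed by
     * 16K/64K CPU pages occupies no more GPU VA than userspace
     * reserved for it. */
    static int sketch_map_sg(struct sg_table *sgt, u64 iova, size_t va_len)
    {
            struct scatterlist *sg;
            size_t mapped = 0;
            unsigned int i;

            for_each_sgtable_dma_sg(sgt, sg, i) {
                    size_t len = min_t(size_t, sg_dma_len(sg),
                                       va_len - mapped);

                    /* sketch_pgtable_map(iova + mapped,
                     *                    sg_dma_address(sg), len); */
                    mapped += len;
                    if (mapped == va_len)
                            break;
            }
            return 0;
    }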
|
|
Currently, mutex_destroy() is not called on the error handling code
paths. This is safe for now, since mutex_destroy() actually does
nothing in non-debug builds. But mutex_destroy() is used to mark the
mutex as uninitialized on debug builds, and any subsequent use of the
mutex is forbidden. It could also lead to problems if mutex_destroy()
gets extended, so add the missing mutex_destroy() calls to eliminate
potential concerns.
Reviewed-by: Christian Gmeiner <cgmeiner@igalia.com>
Signed-off-by: Sui Jingfeng <sui.jingfeng@linux.dev>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
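As a sketch of the pattern being fixed (object and function names are
made up), the error path now unwinds mutex_init() like any other
acquired resource:

    #include <linux/errno.h>
    #include <linux/mutex.h>

    struct sketch_obj {
            struct mutex lock;
    };

    static int sketch_obj_init(struct sketch_obj *obj, bool fail)
    {
            mutex_init(&obj->lock);

            if (fail) {     /* stand-in for a setup step that can fail */
                    /* previously missing: marks the mutex as
                     * uninitialized again on debug builds */
                    mutex_destroy(&obj->lock);
                    return -EINVAL;
            }
            return 0;
    }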
|
|
The 'len' parameter (the 4th argument) is unused, so drop it.
No functional change.
Signed-off-by: Sui Jingfeng <suijingfeng@loongson.cn>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
Now that it is only used to track the driver-internal state of the
MMU global and cmdbuf objects, we can get rid of this property by
making the free/fini functions of those objects safe to call on an
uninitialized object.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <cgmeiner@igalia.com>
|
|
While the interface for the MMU mapping takes phys_addr_t to hold a
full 64-bit address when necessary, and MMUv2 is able to map physical
addresses of up to 40 bits, etnaviv_iommu_map() truncates the address
to 32 bits. Fix this by using the correct type.
Fixes: 931e97f3afd8 ("drm/etnaviv: mmuv2: support 40 bit phys address")
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
|
|
The same logic is already used in two different places and now
it will also be needed outside of the compilation unit, so split
it into a separate function.
Cc: stable@vger.kernel.org # 5.19
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
|
|
Right now the only point where softpin mappings get removed from the
MMU context is when the mapped GEM object is destroyed. However,
userspace might want to reuse that address space before the object
is destroyed, which is a valid usage, as long as all mappings in that
region of the address space are no longer used by any GPU jobs.
Implement reaping of idle MMU mappings that would otherwise
prevent the insertion of a softpin mapping.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Guido Günther <agx@sigxcpu.org>
Acked-by: Guido Günther <agx@sigxcpu.org>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
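A rough sketch of the reaping idea, with the mapping struct and the
idle test reduced to assumptions:

    #include <linux/list.h>
    #include <linux/types.h>

    struct sketch_mapping {
            struct list_head node;
            u64 iova;
            u64 size;
            bool idle;      /* stand-in for a fence/use check */
    };

    /* Before failing a softpin insertion, drop idle mappings that
     * overlap the requested range; bail out if any of them is still
     * in use by a GPU job. */
    static bool sketch_reap_range(struct list_head *mappings,
                                  u64 start, u64 end)
    {
            struct sketch_mapping *m, *tmp;

            list_for_each_entry_safe(m, tmp, mappings, node) {
                    if (m->iova >= end || m->iova + m->size <= start)
                            continue;       /* no overlap */
                    if (!m->idle)
                            return false;   /* hole can't be freed */
                    list_del(&m->node);     /* real code also unmaps */
            }
            return true;
    }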
|
|
The flush sequence is a marker that the page tables have been changed
and any affected TLBs need to be flushed. Move the flush_seq increment
a little further down the call stack to place it next to the actual
page table manipulation. No functional change.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Tested-by: Guido Günther <agx@sigxcpu.org>
Acked-by: Guido Günther <agx@sigxcpu.org>
|
|
This makes it a little more clear that the mapping holds a reference
to the context once the buffer has been successfully mapped into that
context and simplifies the error handling a bit.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Tested-by: Guido Günther <agx@sigxcpu.org>
Acked-by: Guido Günther <agx@sigxcpu.org>
|
|
When the mapping is already reaped the unmap must be a no-op, as we
would otherwise try to remove the mapping twice, corrupting the involved
data structures.
Cc: stable@vger.kernel.org # 5.4
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Tested-by: Guido Günther <agx@sigxcpu.org>
Acked-by: Guido Günther <agx@sigxcpu.org>
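The shape of the fix, as a sketch (struct and function names are
illustrative; the real driver keeps a context back-pointer in the
mapping):

    struct sketch_context;

    struct sketch_mapping {
            struct sketch_context *context; /* NULL once reaped */
    };

    static void sketch_unmap_gem(struct sketch_mapping *mapping)
    {
            /* already reaped: the unmap must be a no-op, or the
             * mapping would be unlinked and unmapped twice */
            if (!mapping->context)
                    return;

            /* ... regular unmap path ... */
    }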
|
|
When we forcefully evict a mapping from the address space and thus the
MMU context, the MMU context is leaked, as the mapping no longer points
to it, so it doesn't get freed when the GEM object is destroyed. Add the
missing context put to fix the leak.
Cc: stable@vger.kernel.org # 5.4
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Tested-by: Michael Walle <michael@walle.cc>
Tested-by: Marek Vasut <marex@denx.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
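A sketch of the fix in the eviction path; the helpers and struct
layout are assumed, but the point is the reference drop that was
missing:

    #include <linux/kref.h>
    #include <linux/list.h>

    struct sketch_ctx {
            struct kref ref;
    };

    struct sketch_mapping {
            struct list_head scan_node;
            struct sketch_ctx *context;
    };

    static void sketch_ctx_release(struct kref *ref)
    {
            /* free the context */
    }

    /* Each mapping holds a context reference; forceful eviction must
     * drop it, since gem_free_object can no longer do so. */
    static void sketch_evict(struct list_head *evict_list)
    {
            struct sketch_mapping *m, *n;

            list_for_each_entry_safe(m, n, evict_list, scan_node) {
                    /* ... remove from page tables and drm_mm ... */
                    kref_put(&m->context->ref, sketch_ctx_release);
                    m->context = NULL;
                    list_del_init(&m->scan_node);
            }
    }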
|
|
drm-next
Again, nothing big this time. Mostly a new performance counter from
Christian, some more lockdep annotations from Guido and removal of
functionality that duplicates driver core from Robin.
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Lucas Stach <l.stach@pengutronix.de>
Link: https://patchwork.freedesktop.org/patch/msgid/81367a99b8949584e5becd334ac001b9ad3dc37a.camel@pengutronix.de
|
|
etnaviv_iommu_find_iova has a lockdep_assert_held() for the context
lock, so etnaviv_iommu_insert_exact should have it as well.
Signed-off-by: Guido Günther <agx@sigxcpu.org>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
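Concretely this is a one-line annotation; a sketch assuming the
context lock is a mutex named 'lock':

    #include <linux/lockdep.h>
    #include <linux/mutex.h>

    struct sketch_iommu_context {
            struct mutex lock;
    };

    static int sketch_insert_exact(struct sketch_iommu_context *ctx)
    {
            /* documents, and verifies on lockdep-enabled builds,
             * that the caller already holds the context lock */
            lockdep_assert_held(&ctx->lock);

            /* ... insert the node at the requested address ... */
            return 0;
    }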
|
|
Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function
returns the number of created entries in the DMA address space.
However, the subsequent dma_sync_sg_for_{device,cpu}() and
dma_unmap_sg() calls must be made with the original number of entries
passed to dma_map_sg().
struct sg_table is a common structure used for describing a non-contiguous
memory buffer, used commonly in the DRM and graphics subsystems. It
consists of a scatterlist with memory pages and DMA addresses (sgl entry),
as well as the number of scatterlist entries: CPU pages (orig_nents entry)
and DMA mapped pages (nents entry).
It turned out that it was a common mistake to misuse nents and orig_nents
entries, calling DMA-mapping functions with a wrong number of entries or
ignoring the number of mapped entries returned by the dma_map_sg()
function.
To avoid such issues, let's use the common dma-mapping wrappers
operating directly on struct sg_table objects and use scatterlist page
iterators where possible. This, almost always, hides references to the
nents and orig_nents entries, making the code robust, easier to follow
and copy/paste safe.
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Reviewed-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Lucas Stach <l.stach@pengutronix.de>
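The wrappers in question take the whole struct sg_table and keep the
nents/orig_nents bookkeeping internal. A minimal sketch of the
resulting pattern (device, direction and buffer are placeholders):

    #include <linux/dma-mapping.h>
    #include <linux/scatterlist.h>

    static int sketch_dma(struct device *dev, struct sg_table *sgt)
    {
            int ret;

            /* returns 0 or a negative error code; the number of DMA
             * entries is stored inside the sg_table, so it can no
             * longer be confused with orig_nents */
            ret = dma_map_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
            if (ret)
                    return ret;

            dma_sync_sgtable_for_device(dev, sgt, DMA_TO_DEVICE);
            /* ... hardware access ... */
            dma_unmap_sgtable(dev, sgt, DMA_TO_DEVICE, 0);
            return 0;
    }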
|
|
The switch to per-process address spaces erroneously dropped the check
which validated that the command buffer is mapped through the linear
aperture, as required by the hardware. This turned a system
misconfiguration, which used to produce a helpful error message, into
a very hard to debug issue. Reinstate the check at the appropriate
location.
Fixes: 17e4660ae3d7 (drm/etnaviv: implement per-process address spaces on MMUv2)
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
|
|
Allow the mapping code to request a specific virtual address for the gem
mapping. If the virtual address is zero we fall back to the old mode of
allocating a virtual address for the mapping.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
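A sketch of the dispatch, with all names invented: zero means "pick an
address for me", anything else is an exact placement request.

    #include <linux/types.h>

    struct sketch_context;
    struct sketch_mapping;

    static int sketch_insert_exact(struct sketch_context *ctx,
                                   struct sketch_mapping *m, u64 va)
    {
            return 0;       /* stub: place the node exactly at va */
    }

    static int sketch_find_iova(struct sketch_context *ctx,
                                struct sketch_mapping *m)
    {
            return 0;       /* stub: search the allocator for a hole */
    }

    static int sketch_map_gem(struct sketch_context *ctx,
                              struct sketch_mapping *m, u64 va)
    {
            if (va)         /* softpin: place exactly where asked */
                    return sketch_insert_exact(ctx, m, va);

            return sketch_find_iova(ctx, m);        /* legacy mode */
    }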
|
|
This builds on top of the MMU contexts introduced earlier. Instead of having
one context per GPU core, each GPU client receives its own context.
On MMUv1 this still means a single shared pagetable set is used by all
clients, but on MMUv2 there is now a distinct set of pagetables for each
client. As the command fetch is also translated via the MMU on MMUv2 the
kernel command ringbuffer is mapped into each of the client pagetables.
As the MMU context switch is a bit of a heavy operation, due to the needed
cache and TLB flushing, this patch implements a lazy way of switching the
MMU context. The kernel does not have its own MMU context, but reuses the
last client context for all of its operations. This has some visible impact,
as the GPU can now only be started once a client has submitted some work and
we got the client MMU context assigned. Also the MMU context has a different
lifetime than the general client context, as the GPU might still execute the
kernel command buffer in the context of a client even after the client has
completed all GPU work and has been terminated. Only when the GPU is runtime
suspended or switches to another client's MMU context is the old context
freed up.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
|
|
This reworks the MMU handling to make it possible to have multiple MMU contexts.
A context is basically one instance of GPU page tables. Currently we have one
set of page tables per GPU, which isn't all that clever, as it has the
following two consequences:
1. All GPU clients (aka processes) are sharing the same pagetables, which means
there is no isolation between clients, but only between GPU assigned memory
spaces and the rest of the system. Better than nothing, but also not great.
2. Clients operating on the same set of buffers with different etnaviv GPU
cores, e.g. a workload using both the 2D and 3D GPU, need to map the used
buffers into the pagetable sets of each used GPU.
This patch reworks all the MMU handling to introduce the abstraction of the
MMU context. A context can be shared across different GPU cores, as long as
they have compatible MMU implementations, which is the case for all systems
with Vivante GPUs seen in the wild.
As MMUv1 is not able to change pagetables on the fly without a
"stop the world" operation (stop the GPU, change the pagetables via
CPU interaction, restart the GPU), the implementation introduces a
shared context on MMUv1, which is returned whenever there is a request
for a new context.
This patch assigns a MMU context to each GPU, so on MMUv2 systems there is
still one set of pagetables per GPU, but due to the shared context MMUv1
systems see a change in behavior as now a single pagetable set is used
across all GPU cores.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
|
|
If an MMU is shared between multiple GPUs, all of them need to flush their
TLBs, so a single marker that gets reset on the first flush won't do.
Replace the flush marker with a sequence number, so that it's possible to
check if the TLB is in sync with the current page table state for each GPU.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
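A sketch of the sequence-number scheme with assumed field names: the
context counts page table modifications, while each GPU remembers the
last sequence it flushed for.

    #include <linux/atomic.h>

    struct sketch_context {
            atomic_t flush_seq;     /* bumped on page table changes */
    };

    struct sketch_gpu {
            struct sketch_context *ctx;
            int flushed_seq;        /* last sequence this GPU flushed */
    };

    /* Unlike a single reset-on-first-flush marker, every GPU sharing
     * the context catches up independently. */
    static void sketch_maybe_flush_tlb(struct sketch_gpu *gpu)
    {
            int seq = atomic_read(&gpu->ctx->flush_seq);

            if (gpu->flushed_seq == seq)
                    return;         /* TLB already in sync */

            /* ... queue a TLB flush into the command stream ... */
            gpu->flushed_seq = seq;
    }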
|
|
This decouples creating the cmdbuf suballocator from mapping the
region into the GPU address space, allowing multiple address spaces to
share a single cmdbuf suballocator.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-by: Guido Günther <agx@sigxcpu.org>
|
|
Drop use of the deprecated drmP.h header file.
Fix fallout in all .c files.
The etnaviv_drv.h header file was made self-contained,
and missing includes were then added to the .c files that needed them.
In a few cases the list of include files was sorted.
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Russell King <linux+etnaviv@armlinux.org.uk>
Cc: Christian Gmeiner <christian.gmeiner@gmail.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: etnaviv@lists.freedesktop.org
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
This replaces the repetitive GPL-2.0 license text in code and header files
with the SPDX tags. Generated hardware headers aren't changed, as any changes
there need to be done in the upstream rnndb repository.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
|
|
This was useful on MMUv1 GPUs, which don't generate proper faults,
when the GPU write caches weren't fully understood and not properly
handled by the kernel driver. As this has been fixed for quite some
time, the cycling through the MMU address space needlessly spreads
out the MMU mappings.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
Add a jump target so that the call of the function "mutex_unlock"
occurs only once, at the end of this function implementation.
Replace three calls by goto statements.
This issue was detected by using the Coccinelle software.
Signed-off-by: Markus Elfring <elfring@users.sourceforge.net>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
Using the IOMMU API to manage the internal GPU MMU has been a
historical accident, and it keeps getting in the way, as well as
entangling the driver with the inner workings of the IOMMU
subsystem.
Clean this up by removing the usage of iommu_domain, which is the
last piece linking etnaviv to the IOMMU subsystem.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
And clean up the header file a bit.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-By: Wladimir J. van der Laan <laanwj@gmail.com>
|
|
This is a preparation to remove the etnaviv dependency on the IOMMU
subsystem by importing the relevant parts of the iommu map/unmap
functions into the driver.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-By: Wladimir J. van der Laan <laanwj@gmail.com>
|
|
The handler has never been used, so it's really just dead code.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Philipp Zabel <p.zabel@pengutronix.de>
Reviewed-By: Wladimir J. van der Laan <laanwj@gmail.com>
|
|
The drm_mm range manager claimed to support top-down insertion, but it
was neither searching for the top-most hole that could fit the
allocation request nor fitting the request to the hole correctly.
In order to search the range efficiently, we create a secondary index
for the holes using either their size or their address. This index
allows us to find the smallest hole or the hole at the bottom or top of
the range efficiently, whilst keeping the hole stack to rapidly service
evictions.
v2: Search for holes both high and low. Rename flags to mode.
v3: Discover rb_entry_safe() and use it!
v4: Kerneldoc for enum drm_mm_insert_mode.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: "Christian König" <christian.koenig@amd.com>
Cc: David Airlie <airlied@linux.ie>
Cc: Russell King <rmk+kernel@armlinux.org.uk>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Sean Paul <seanpaul@chromium.org>
Cc: Lucas Stach <l.stach@pengutronix.de>
Cc: Christian Gmeiner <christian.gmeiner@gmail.com>
Cc: Rob Clark <robdclark@gmail.com>
Cc: Thierry Reding <thierry.reding@gmail.com>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: Alexandre Courbot <gnurou@gmail.com>
Cc: Eric Anholt <eric@anholt.net>
Cc: Sinclair Yeh <syeh@vmware.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com> # vmwgfx
Reviewed-by: Lucas Stach <l.stach@pengutronix.de> #etnaviv
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20170202210438.28702-1-chris@chris-wilson.co.uk
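For driver authors the result looks as follows (a sketch; sizes and
range are placeholders): drm_mm_insert_node_in_range() takes an
explicit enum drm_mm_insert_mode instead of the old search/allocator
flag pairs.

    #include <drm/drm_mm.h>

    static int sketch_alloc_high(struct drm_mm *mm,
                                 struct drm_mm_node *node,
                                 u64 size, u64 start, u64 end)
    {
            /* DRM_MM_INSERT_HIGH walks holes from the top of the
             * range and fits the request to the top of the hole;
             * DRM_MM_INSERT_LOW is the mirror image, and
             * DRM_MM_INSERT_BEST picks the smallest suitable hole
             * via the size-ordered secondary index. */
            return drm_mm_insert_node_in_range(mm, node, size,
                                               0 /* alignment */,
                                               0 /* color */,
                                               start, end,
                                               DRM_MM_INSERT_HIGH);
    }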
|
|
into drm-next
It includes code cleanups from Bhumika and Liviu, a significant shader
performance fix and additions to the cmdstream validator from Wladimir
and the addition of a cmdbuf suballocator by myself.
The suballocator improves performance on all chips by reducing the CPU
overhead of the kernel driver and sidesteps the GC3000 FE MMU flush
erratum, now making the workarounds in IOVA allocation we had before
unnecessary, which results in a nice cleanup of the code in that area.
* 'drm-etnaviv-next' of https://git.pengutronix.de/git/lst/linux:
drm/etnaviv: Remove duplicate header file include
Revert "drm/etnaviv: trick drm_mm into giving out a low IOVA"
drm/etnaviv: add cmdbuf suballocator
drm/etnaviv: get cmdbuf physical address through the cmdbuf abstraction
drm/etnaviv: wire up iova handling in new cmdbuf abstraction
drm/etnaviv: move cmdbuf de-/allocation into own file
drm/etnaviv: always flush MMU TLBs on map/unmap
drm/etnaviv: constify etnaviv_iommu_ops structures
drm/etnaviv: set up initial PULSE_EATER register
drm/etnaviv: add new GC3000 sensitive states
|
|
Now that commandstreams are handled through the cmdbuf suballocator
the workaround to make the IOVA games work is not needed anymore.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
|
|
There are 3 big benefits to suballocating a single big DMA buffer
for command submission:
1. Avoid hammering CMA. The old way of allocating and freeing a DMA
buffer for each submission was hitting some of the really slow
paths in CMA, as this allocator was not designed for a concurrent
small-buffer load.
2. Less TLB flushes on IOMMUv2. If a new command buffer is mapped into
the GPU address space the MMU TLBs need to be flushed. By having
one big buffer statically mapped to the GPU, a lot of those flushes
can be avoided.
3. No funky workarounds for GC3000. The FE TLB flush on GC3000 isn't
reliable. To work around that we tried to lay out the cmdbufs in
the GPU address space in a way to avoid this issue. This hasn't
always worked if the address space is crowded. A single statically
mapped buffer avoids the erratum completely.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
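A toy sketch of the suballocation idea (sizes and names are invented):
one big, statically mapped DMA buffer is carved into fixed-size
granules handed out per submission.

    #include <linux/bitmap.h>
    #include <linux/sizes.h>

    #define SUBALLOC_SIZE   SZ_256K
    #define GRANULE_SIZE    SZ_4K
    #define NUM_GRANULES    (SUBALLOC_SIZE / GRANULE_SIZE)

    struct sketch_suballoc {
            void *vaddr;    /* one big buffer, mapped to the GPU once */
            DECLARE_BITMAP(granule_map, NUM_GRANULES);
    };

    /* Handing out a granule is a bitmap operation: no CMA allocation
     * and no new GPU mapping, hence no per-cmdbuf TLB flush. Freeing
     * is clear_bit(); real code would wait when the map is full. */
    static void *sketch_suballoc_get(struct sketch_suballoc *s)
    {
            unsigned int bit = find_first_zero_bit(s->granule_map,
                                                   NUM_GRANULES);

            if (bit >= NUM_GRANULES)
                    return NULL;
            set_bit(bit, s->granule_map);
            return s->vaddr + bit * GRANULE_SIZE;
    }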
|
|
This will get more complex with the following changes, so move it
into its own place.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
|
|
This ensures that the GPU isn't able to write into already freed
objects; doing this in the IOVA reaper isn't enough, as the
gem_free_object path will also cause unmaps to happen.
On MMUv2 this also ensures that stale entries, which may have
been prefetched into the TLB, will be purged.
The flush is low overhead, as it gets batched up with the next
user command buffer, so this isn't incurring an overhead for
each buffer map/unmap.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux into drm-next
Backmerge Linus master to get the connector locking revert.
* 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux: (645 commits)
sysctl: fix proc_doulongvec_ms_jiffies_minmax()
Revert "drm/probe-helpers: Drop locking from poll_enable"
MAINTAINERS: add Dan Streetman to zbud maintainers
MAINTAINERS: add Dan Streetman to zswap maintainers
mm: do not export ioremap_page_range symbol for external module
mn10300: fix build error of missing fpu_save()
romfs: use different way to generate fsid for BLOCK or MTD
frv: add missing atomic64 operations
mm, page_alloc: fix premature OOM when racing with cpuset mems update
mm, page_alloc: move cpuset seqcount checking to slowpath
mm, page_alloc: fix fast-path race with cpuset update or removal
mm, page_alloc: fix check for NULL preferred_zone
kernel/panic.c: add missing \n
fbdev: color map copying bounds checking
frv: add atomic64_add_unless()
mm/mempolicy.c: do not put mempolicy before using its nodemask
radix-tree: fix private list warnings
Documentation/filesystems/proc.txt: add VmPin
mm, memcg: do not retry precharge charges
proc: add a schedule point in proc_pid_readdir()
...
|
|
After rollover of the IOVA space, we want to get a low IOVA address,
otherwise the games we play by remembering the last IOVA are
pointless. When we search for a free hole with DRM_MM_SEARCH_DEFAULT,
drm_mm will pop the next entry from the free holes stack, which will
likely be a high IOVA. By using DRM_MM_SEARCH_BELOW we can trick
drm_mm into reversing the search and provide us with a low IOVA.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Wladimir van der Laan <laanwj@gmail.com>
|
|
Compute the minimal required hole during scan and only evict those nodes
that overlap. This enables us to reduce the number of nodes we need to
evict to the bare minimum.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20161222083641.2691-31-chris@chris-wilson.co.uk
|
|
The scan state occupies a large proportion of the struct drm_mm and is
rarely used and only contains temporary state. That makes it suitable
for moving to its own struct and onto the stack of the callers.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
[danvet: Fix up etnaviv to compile, was missing a BUG_ON.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|
|
To make sure we don't place anything there which might confuse
the FE prefetcher. This gets rid of another case of FE MMU faults
when the address space gets crowded before triggering the reaper.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
At least on the GC3000 the FE MMU is not properly flushing stale TLB
entries. Make sure to map the cmdbufs with a big enough spacing in
the IOVAs to not hit old/prefetched TLB entries when jumping to a
newly mapped cmdbuf.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
All other parts are now in place, so implement the actual translation
step and hook it up, so the driver claims support for cores with
the new MMU.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
With MMUv2 all buffers need to be mapped through the MMU once it
is enabled. Align the buffer size to 4K, as the MMU is only able to
map page aligned buffers.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
With MMUv2 the command buffers need to be mapped through the MMU.
Split out the iova search and MMU reaping logic so it can be reused
for the cmdbuf mapping, where no GEM object is involved.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
The GPU virtual address for the command buffers differs depending on
the IOMMU version. Move the calculation of the iova into etnaviv
mmu, to enable proper dispatch.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
The GPU code doesn't need to deal with the IOMMU directly, instead
it can all be hidden behind the etnaviv mmu interface. Move the
last remaining part into etnaviv mmu.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
So we can call the v2 restore code once it is there.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
As the comment above the code states, the linear window is only
available on MMUv1. Don't try to use it on MMUv2.
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
|
|
Currently, we scan the list of mappings each time we want to operate on
the vram_mapping struct. Rather than repeatedly scanning these, look
them up once in the submission path, and then use _reference and
_unreference methods as necessary to manage this object.
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
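A sketch of the refcounting shape (names assumed; the real driver
embeds this in its vram_mapping structure): look the mapping up once
at submit time, take a reference, and drop it when the job retires.

    #include <linux/kref.h>

    struct sketch_vram_mapping {
            struct kref refcount;
            /* ... iova, list nodes, GEM object back-pointer ... */
    };

    static void sketch_mapping_release(struct kref *kref)
    {
            /* unmap and free once the last user is gone */
    }

    static void sketch_mapping_reference(struct sketch_vram_mapping *m)
    {
            kref_get(&m->refcount);
    }

    static void sketch_mapping_unreference(struct sketch_vram_mapping *m)
    {
            kref_put(&m->refcount, sketch_mapping_release);
    }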
|
|
This adds the etnaviv DRM driver and hooks it up in Makefiles
and Kconfig.
Signed-off-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|