path: root/drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
Age  Commit message  Author
2020-03-09  drm/amdgpu: remove unused functions  (Nirmoy Das)
AMDGPU statically sets priority for compute queues at initialization so remove all the functions responsible for changing compute queue priority dynamically. Signed-off-by: Nirmoy Das <nirmoy.das@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2020-03-09  drm/amdgpu: set compute queue priority at mqd_init  (Nirmoy Das)
We were changing the compute ring priority while rings were in use, before every job submission, which is not recommended. This patch sets compute queue priority at mqd initialization for gfx8, gfx9 and gfx10. Policy: make queue 0 of each pipe a high-priority compute queue. High/normal priority compute sched lists are generated from the set of high/normal priority compute queues. At context creation, the entity of a compute queue gets a sched list from the high or normal priority list depending on ctx->priority. Signed-off-by: Nirmoy Das <nirmoy.das@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
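A minimal C sketch of the policy described above; compute_queue_is_high_prio() and the high_priority field are illustrative assumptions, while the real gfx8/9/10 mqd init code programs hardware-specific MQD priority fields instead:

/* Illustrative only: "queue 0 of each pipe is the high-priority compute queue". */
static bool compute_queue_is_high_prio(int pipe, int queue)
{
	return queue == 0;	/* one high-priority queue per pipe */
}

static void compute_mqd_set_priority(struct amdgpu_ring *ring)
{
	if (ring->funcs->type == AMDGPU_RING_TYPE_COMPUTE)
		ring->high_priority = compute_queue_is_high_prio(ring->pipe,
								 ring->queue);	/* assumed field */
}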
2020-02-26  drm/amdgpu/ring: move debugfs init into core amdgpu debugfs  (Alex Deucher)
In order to remove the load and unload drm callbacks, we need to reorder the init sequence to move all the drm debugfs file handling. Do this for rings. Tested-by: Thomas Zimmermann <tzimmermann@suse.de> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-07-18  drm/amdgpu: increase AMDGPU_MAX_RINGS to add 2nd vcn instance  (James Zhu)
Increase AMDGPU_MAX_RINGS to add the 2nd VCN instance. Signed-off-by: James Zhu <James.Zhu@amd.com> Reviewed-by: Leo Liu <leo.liu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-06-21  drm/amdgpu: add gfx v10 implementation (v10)  (Hawking Zhang)
GFX is the graphics and compute block on the GPU. v1: add initial gfx v10 implementation (Ray) v2: convert to new get_vm_pde function in emit_vm_flush (Hawking) v3: switch to new emit ib interfaces (Hawking) v4: squash in updates (Alex) v5: remove unused variables (Alex) v6: some golden regs moved to vbios (Alex) v7: squash in some cleanups (Alex) v8: squash in golden settings update (Alex) v9: squash in whitespace fixes (Ernst Sjöstrand, Alex) v10: squash in GDS backup size fix and GDS/GWS/OA removal rebase fixes (Hawking) Signed-off-by: Huang Rui <ray.huang@amd.com> Signed-off-by: Hawking Zhang <Hawking.Zhang@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-06-21  drm/amdgpu: Add new ring interface preempt_ib  (Rex Zhu)
Used to trigger preemption. Acked-by: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Rex Zhu <Rex.Zhu@amd.com> Signed-off-by: Jack Xiao <Jack.Xiao@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-06-21  drm/amdgpu: add the trailing fence per ring  (Jack Xiao)
The trailing fence for ring is used to track the completion of preemption. Acked-by: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Jack Xiao <Jack.Xiao@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-06-21  drm/amdgpu: Add helper function amdgpu_ring_set_preempt_cond_exec  (Rex Zhu)
The ring can be preempted by setting cond_exec to false. Acked-by: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Rex Zhu <Rex.Zhu@amd.com> Signed-off-by: Jack Xiao <Jack.Xiao@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
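The helper amounts to a one-line inline; a rough sketch against the ring's existing cond_exe_cpu_addr field:

/* Rough sketch: clearing the COND_EXEC slot lets the CP preempt the ring. */
static inline void amdgpu_ring_set_preempt_cond_exec(struct amdgpu_ring *ring,
						     bool cond_exec)
{
	*ring->cond_exe_cpu_addr = cond_exec;
}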
2019-05-24  drm/amdgpu: add no_user_fence flag to ring funcs  (Leo Liu)
So we can generalize handling of engines that don't support user fences. Signed-off-by: Leo Liu <leo.liu@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2019-01-25  drm/amdgpu: add flags to emit_ib interface v2  (Jack Xiao)
Replace the last bool type parameter with a general flags parameter, to make the last parameter be able to contain more information. v2: drop setting need_ctx_switch = false Reviewed-by: Christian König <christian.koenig@amd.com> Reviewed-by: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Jack Xiao <Jack.Xiao@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-12-14  drm/amdgpu: increase the MAX ring number  (Evan Quan)
As two more SDMA page queue rings are added on Vega20. Signed-off-by: Evan Quan <evan.quan@amd.com> Reviewed-by: Oak Zeng <Oak.Zeng@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-11-05  drm/amdgpu: Modify the argument of emit_ib interface  (Rex Zhu)
Use the pointer to struct amdgpu_job as the function argument instead of vmid, so the other members of struct amdgpu_job can be accessed in the emit_ib function. v2: add a wrapper for getting the VMID; add the job before the ib in the parameter list. v3: refine the wrapper name Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Rex Zhu <Rex.Zhu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
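The v2 VMID wrapper boils down to a small macro, and the reworked callback takes the job ahead of the ib; a sketch (signature simplified, prior to the later flags rework):

/* Fetch the VMID whether or not a job is attached to the IB. */
#define AMDGPU_JOB_GET_VMID(job) ((job) ? (job)->vmid : 0)

/* Simplified shape of the reworked callback:
 * void (*emit_ib)(struct amdgpu_ring *ring, struct amdgpu_job *job,
 *		   struct amdgpu_ib *ib, bool ctx_switch);
 */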
2018-11-05  drm/amdgpu: Retire amdgpu_ring.ready flag v4  (Andrey Grodzovsky)
Start using drm_gpu_scheduler.ready instead. v3: Add a helper function to run the ring test and set the sched.ready flag accordingly; clean explicit sched.ready sets from the IP specific files. v4: Add kerneldoc and rebase. Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
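The v3 helper is roughly the following (error reporting condensed):

/* Run the ring test and record the result in the scheduler's ready flag,
 * replacing the retired amdgpu_ring.ready. Rough sketch. */
int amdgpu_ring_test_helper(struct amdgpu_ring *ring)
{
	int r = amdgpu_ring_test_ring(ring);

	if (r)
		DRM_DEV_ERROR(ring->adev->dev, "ring %s test failed (%d)\n",
			      ring->name, r);

	ring->sched.ready = !r;
	return r;
}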
2018-09-26  drm/amdgpu: Move fence SW fallback warning v3  (Andrey Grodzovsky)
Only print the warning if there was actually some fence processed from the SW fallback timer. v2: Add return value to amdgpu_fence_process to let amdgpu_fence_fallback know fences were actually processed and then print the warning. v3: Always return true if seq != last_seq Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Acked-off-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
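With amdgpu_fence_process() now reporting whether it advanced the last-seen sequence, the fallback timer handler looks roughly like:

/* Rough sketch: only warn if the SW fallback actually processed fences. */
static void amdgpu_fence_fallback(struct timer_list *t)
{
	struct amdgpu_ring *ring = from_timer(ring, t, fence_drv.fallback_timer);

	if (amdgpu_fence_process(ring))
		DRM_WARN("Fence fallback timer expired on ring %s\n",
			 ring->name);
}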
2018-09-26  Revert "drm/amdgpu: remove fence fallback"  (Andrey Grodzovsky)
This reverts commit 9b0df0937a852d299fbe42a5939c9a8a4cc83c55. This commit breaks KCQ IB test and S3 on Polaris 11. Signed-off-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-09-19  drm/amdgpu: remove fence fallback  (Christian König)
DC doesn't seem to have a fallback path either. So when interrupts don't work any more we are pretty much busted no matter what. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-08-27  drm/amdgpu: add ring soft recovery v4  (Christian König)
Instead of hammering hard on the GPU try a soft recovery first. v2: reorder code a bit v3: increase timeout to 10ms, increment GPU reset counter v4: squash in compile fix (Christian) Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Huang Rui <ray.huang@amd.com>
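In sketch form, soft recovery spins on the new per-ring soft_recovery callback until the hung fence signals or the 10ms deadline passes, bumping the reset counter as in v3:

/* Rough sketch of ring soft recovery. */
bool amdgpu_ring_soft_recovery(struct amdgpu_ring *ring, unsigned int vmid,
			       struct dma_fence *fence)
{
	ktime_t deadline = ktime_add_us(ktime_get(), 10000);

	if (!ring->funcs->soft_recovery || !fence)
		return false;

	atomic_inc(&ring->adev->gpu_reset_counter);
	while (!dma_fence_is_signaled(fence) &&
	       ktime_to_ns(ktime_sub(deadline, ktime_get())) > 0)
		ring->funcs->soft_recovery(ring, vmid);

	return dma_fence_is_signaled(fence);
}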
2018-08-27  drm/amdgpu: remove ring lru handling  (Christian König)
Not needed any more. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-08-27  drm/amdgpu: move ring macros into amdgpu_ring header  (Huang Rui)
Demangle amdgpu.h. Signed-off-by: Huang Rui <ray.huang@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-07-27  drm/amdgpu: add support for inplace IB patching for MM engines v2  (Christian König)
We are going to need that for the second UVD instance on Vega20. v2: rename to patch_cs_in_place Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-and-tested-by: James Zhu <James.Zhu@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-07-16  drm/amdgpu: remove ring parameter from amdgpu_job_submit  (Christian König)
We know the ring through the entity anyway. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Junwei Zhang <Jerry.Zhang@amd.com> Acked-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-06-15  drm/amdgpu: define and add extra dword for jpeg ring  (Boyuan Zhang)
Define extra dword for jpeg ring. Jpeg ring will allocate extra dword to store the patch commands for fixing the known issue. v2: dropping extra_dw for rings other than jpeg. Signed-off-by: Boyuan Zhang <boyuan.zhang@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-06-15  drm/amdgpu: define vcn jpeg ring  (Boyuan Zhang)
Add AMDGPU_RING_TYPE_VCN_JPEG ring define Signed-off-by: Boyuan Zhang <boyuan.zhang@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-05-18  drm/amdgpu/vg20: increase AMDGPU_MAX_RINGS by 3  (James Zhu)
Vega20 has two UVD hardware instances. The additional UVD instance adds one decode ring and two encode rings, so AMDGPU_MAX_RINGS needs to increase by 3. Signed-off-by: James Zhu <James.Zhu@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-05-15  drm/amdgpu: optionally do a writeback but don't invalidate TC for IB fences  (Marek Olšák)
There is a new IB flag that enables this new behavior. Full invalidation is unnecessary for RELEASE_MEM and doesn't make sense when draw calls from two adjacent gfx IBs run in parallel. This will be the new default for Mesa. v2: bump the version Signed-off-by: Marek Olšák <marek.olsak@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-05-15  drm/amdgpu: add emit_reg_write_reg_wait ring callback  (Alex Deucher)
This callback writes a value to a register and then reads back another register and waits for a value, in a single operation. Provide a helper function using two operations for engines that don't support this operation. Reviewed-by: Huang Rui <ray.huang@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
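The fallback helper for engines without the combined packet simply emits the two operations back to back; roughly:

/* Rough sketch of the two-operation fallback: write reg0, then poll reg1. */
void amdgpu_ring_emit_reg_write_reg_wait_helper(struct amdgpu_ring *ring,
						uint32_t reg0, uint32_t reg1,
						uint32_t ref, uint32_t mask)
{
	amdgpu_ring_emit_wreg(ring, reg0, ref);
	amdgpu_ring_emit_reg_wait(ring, reg1, ref, mask);
}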
2018-02-06  drm/amdgpu: Add KFD eviction fence  (Felix Kuehling)
This fence is used by KFD to keep memory resident while user mode queues are enabled. Trying to evict memory will trigger the enable_signaling callback, which starts a KFD eviction, which involves preempting user mode queues before signaling the fence. There is one such fence per process. v2: * Grab a reference to mm_struct * Dereference fence after NULL check * Simplify fence release, no need to signal without anyone waiting * Added signed-off-by Harish, who is the original author of this code v3: * update MAINTAINERS file * change amd_kfd_ prefix to amdkfd_ * remove useless initialization of variable to NULL v4: * set amdkfd_fence_ops to be static * Suggested by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Harish Kasiviswanathan <Harish.Kasiviswanathan@amd.com> Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Oded Gabbay <oded.gabbay@gmail.com> Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
2018-02-06  drm/amdgpu: Fix header file dependencies  (Felix Kuehling)
Signed-off-by: Felix Kuehling <Felix.Kuehling@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Oded Gabbay <oded.gabbay@gmail.com>
2018-02-19  drm/amdgpu: separate PASID mapping from VM flush v2  (Christian König)
Stuffing the PASID mapping into the VM flush isn't flexible enough, since the PASID mapping doesn't change as often as we need a VM flush. v2: add missing use of gmc_v7_0_emit_pasid_mapping Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <Felix.Kuehling@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-02-19  drm/amdgpu: cache the fence to wait for a VMID  (Christian König)
Beneficial when a lot of processes are waiting for VMIDs. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-02-19  drm/amdgpu: add new emit_reg_wait callback  (Christian König)
Allows us to wait for a register value/mask on a ring. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Felix Kuehling <felix.kuehling@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-02-19  drm/amdgpu: remove now superfluous *_hdp operation  (Christian König)
All HDP invalidations and most flushes can now be replaced by the generic ASIC function. Signed-off-by: Christian König <christian.koenig@amd.com> Acked-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2018-02-19  drm/amdgpu: forward pasid to backend flush implementations  (Christian König)
Forward the pasid from the VM code to the emit_vm_flush function and update all implementations with the new parameter. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-12-27  drm/amdgpu: rename vm_id to vmid  (Christian König)
sed -i "s/vm_id/vmid/g" drivers/gpu/drm/amd/amdgpu/*.c sed -i "s/vm_id/vmid/g" drivers/gpu/drm/amd/amdgpu/*.h Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-12-12  drm/amdgpu: use polling mem to set SDMA3 wptr for VF  (Pixel Ding)
On Tonga VF, there are two sources updating the wptr register for sdma3: 1) polling mem and 2) doorbell. When doorbell and polling mem are both enabled on sdma3, collisions can occasionally occur between those two sources when the ucode and the hardware update the wptr register in parallel. The issue doesn't happen on CP GFX/Compute since CP drops all doorbell writes when the VF is inactive. So enable polling mem and don't use the doorbell for SDMA3. Signed-off-by: Pixel Ding <Pixel.Ding@amd.com> Reviewed-by: Monk Liu <monk.liu@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-12-07  drm: move amd_gpu_scheduler into common location  (Lucas Stach)
This moves and renames the AMDGPU scheduler to a common location in DRM in order to facilitate re-use by other drivers. This is mostly a straightforward rename with no code changes. One notable exception is the function to_drm_sched_fence(), which is no longer an inline header function, to avoid the need to export the drm_sched_fence_ops_scheduled and drm_sched_fence_ops_finished structures. Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Tested-by: Dieter Nützel <Dieter@nuetzel-hh.de> Acked-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Lucas Stach <l.stach@pengutronix.de> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-12-04  drm/amdgpu: cleanup force_completion  (Monk Liu)
Cleanups; it now only operates on the given ring. Signed-off-by: Monk Liu <Monk.Liu@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-10-19  drm/amdgpu: busywait KIQ register accessing (v4)  (pding)
Register access is performed while IRQs are disabled, so never sleep in this function. Known issue: dead sleep in many use cases of index/data registers. v2: - wrap the polling fence functions. - don't trigger an IRQ for polling, to avoid wrongly signaled fences. v3: - handle wrap-around gracefully. - add comments for the polling function v4: - don't return a negative timeout that could be confused with an error code Signed-off-by: pding <Pixel.Ding@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
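The wrapped polling path from v2 amounts to a busy-wait on the fence sequence number; a rough sketch that also shows the v3 wrap-around handling and the v4 non-negative return:

signed long amdgpu_fence_wait_polling(struct amdgpu_ring *ring,
				      uint32_t wait_seq,
				      signed long timeout)
{
	uint32_t seq;

	do {
		seq = amdgpu_fence_read(ring);
		udelay(5);
		timeout -= 5;
	} while ((int32_t)(wait_seq - seq) > 0 && timeout > 0);	/* signed diff handles wrap */

	/* never return a negative timeout that could be mistaken for an error */
	return timeout > 0 ? timeout : 0;
}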
2017-10-09  drm/amdgpu: add framework for HW specific priority settings v9  (Andres Rodriguez)
Add an initial framework for changing the HW priorities of rings. The framework allows requesting priority changes for the lifetime of an amdgpu_job. After the job completes the priority will decay to the next lowest priority for which a request is still valid. A new ring function set_priority() can now be populated to take care of the HW specific programming sequence for priority changes. v2: set priority before emitting IB, and take a ref on amdgpu_job v3: use AMD_SCHED_PRIORITY_* instead of AMDGPU_CTX_PRIORITY_* v4: plug amdgpu_ring_restore_priority_cb into amdgpu_job_free_cb v5: use atomic for tracking job priorities instead of last_job v6: rename amdgpu_ring_priority_[get/put]() and align parameters v7: replace spinlocks with mutexes for KIQ compatibility v8: raise ring priority during cs_ioctl, instead of job_run v9: priority_get() before push_job() Reviewed-by: Christian König <christian.koenig@amd.com> Acked-by: Christian König <christian.koenig@amd.com> Signed-off-by: Andres Rodriguez <andresx7@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
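A trimmed sketch of the request side of this framework, assuming the post-rename drm_sched_priority enum and the ring's per-priority num_jobs counters and priority_mutex:

/* Take a priority request; the highest outstanding request wins. */
void amdgpu_ring_priority_get(struct amdgpu_ring *ring,
			      enum drm_sched_priority priority)
{
	if (!ring->funcs->set_priority)
		return;

	atomic_inc(&ring->num_jobs[priority]);

	mutex_lock(&ring->priority_mutex);
	if (priority > ring->priority) {
		ring->priority = priority;
		ring->funcs->set_priority(ring, priority);
	}
	mutex_unlock(&ring->priority_mutex);
}

/* amdgpu_ring_priority_put() decrements num_jobs[priority] and, once it hits
 * zero, decays ring->priority to the next level that still has requests. */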
2017-09-28  drm/amdgpu: map compute rings by least recently used pipe  (Andres Rodriguez)
This patch provides a guarantee that the first n queues allocated by an application will be on different pipes. Where n is the number of pipes available from the hardware. This helps avoid ring aliasing which can result in work executing in time-sliced mode instead of truly parallel mode. Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Andres Rodriguez <andresx7@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-07-14  drm/amdgpu: fix amdgpu_ring_write_multiple  (Christian König)
Overwriting still-used ring content has a low probability of causing problems; not writing at all has a 100% probability of causing problems. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
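A condensed sketch of the fixed behavior: warn when the reservation is exceeded but still write the dwords, wrapping around the ring buffer when needed (field names from amdgpu_ring.h; the signature and chunking are simplified for illustration):

static inline void amdgpu_ring_write_multiple(struct amdgpu_ring *ring,
					      const uint32_t *src, int count_dw)
{
	unsigned int occupied, chunk1, chunk2;

	if (unlikely(ring->count_dw < count_dw))
		DRM_ERROR("writing more dwords to the ring than expected!\n");

	occupied = ring->wptr & ring->buf_mask;
	chunk1 = min_t(unsigned int, ring->buf_mask + 1 - occupied, count_dw);
	chunk2 = count_dw - chunk1;

	memcpy(&ring->ring[occupied], src, chunk1 * sizeof(uint32_t));
	if (chunk2)
		memcpy(ring->ring, src + chunk1, chunk2 * sizeof(uint32_t));

	ring->wptr = (ring->wptr + count_dw) & ring->ptr_mask;
	ring->count_dw -= count_dw;
}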
2017-07-14  drm/amdgpu: move ring helpers to amdgpu_ring.h  (Christian König)
Keep them where they belong. Signed-off-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Acked-by: Felix Kuehling <Felix.Kuehling@amd.com>
2017-06-01  drm/amdgpu: Move compute vm bug logic to amdgpu_vm.c  (Alex Xie)
In review, Christian would like to keep the logic inside amdgpu_vm.c at the cost of being slightly slower. The loop is still optimized out with this patch. v2: remove the if statement. Now it is not slower. Signed-off-by: Alex Xie <AlexBin.Xie@amd.com> Reviewed-by: Christian König <christian.koeng@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-05-31  drm/amdgpu: guarantee bijective mapping of ring ids for LRU v3  (Andres Rodriguez)
Depending on usage patterns, the current LRU policy may create a non-injective mapping between userspace ring ids and kernel rings. This behaviour is undesired as apps that attempt to fill all HW blocks would be unable to reach some of them. This change forces the LRU policy to create bijective mappings only. v2: compress ring_blacklist v3: simplify amdgpu_ring_is_blacklisted() logic Signed-off-by: Andres Rodriguez <andresx7@gmail.com> Reviewed-by: Nicolai Hähnle <nicolai.haehnle@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-05-31  drm/amdgpu: implement lru amdgpu_queue_mgr policy for compute v4  (Andres Rodriguez)
Use an LRU policy to map usermode rings to HW compute queues. Most compute clients use one queue, and usually the first queue available. This results in poor pipe/queue work distribution when multiple compute apps are running. In most cases pipe 0 queue 0 is the only queue that gets used. In order to better distribute work across multiple HW queues, we adopt a policy to map the usermode ring ids to the LRU HW queue. This fixes a large majority of multi-app compute workloads sharing the same HW queue, even though 7 other queues are available. v2: use ring->funcs->type instead of ring->hw_ip v3: remove amdgpu_queue_mapper_funcs v4: change ring_lru_list_lock to spinlock, grab only once in lru_get() Signed-off-by: Andres Rodriguez <andresx7@gmail.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
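A sketch of the mapping step under this LRU policy; the helper names amdgpu_ring_lru_get() and amdgpu_update_cached_map() are assumptions based on the description above:

/* Resolve a userspace ring id to the least recently used HW compute ring,
 * then cache the result so the same user ring id keeps the same kernel ring. */
static int amdgpu_lru_map(struct amdgpu_device *adev,
			  struct amdgpu_queue_mapper *mapper,
			  u32 user_ring,
			  struct amdgpu_ring **out_ring)
{
	int r = amdgpu_ring_lru_get(adev, mapper->hw_ip, out_ring);

	if (r)
		return r;

	return amdgpu_update_cached_map(mapper, user_ring, *out_ring);
}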
2017-05-31  drm/amdgpu: Optimize a function called by every IB scheduling  (Alex Xie)
Move several if statements and a loop statement from run time to initialization time. Signed-off-by: Alex Xie <AlexBin.Xie@amd.com> Reviewed-by: Chunming Zhou <david1.zhou@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-05-24  drm/amdgpu: add vcn enc ring type and functions  (Leo Liu)
Add the ring function callbacks for the encode rings. Signed-off-by: Leo Liu <leo.liu@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-05-24  drm/amdgpu: add a ring func for vcn start command  (Leo Liu)
Needed for the proper command sequence for VCN. Signed-off-by: Leo Liu <leo.liu@amd.com> Reviewed-by: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-05-24  drm/amdgpu: add vcn decode ring type and functions  (Leo Liu)
Add the ring function callbacks for the decode ring. Signed-off-by: Leo Liu <leo.liu@amd.com> Acked-by: Chunming Zhou <david1.zhou@amd.com> Acked-by: Hawking Zhang <Hawking.Zhang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
2017-05-24  drm/amdgpu/SRIOV: implement guilty job TDR (v2)  (Monk Liu)
1. TDR will kick out the guilty job if it hangs more than the threshold given by the kernel parameter "job_hang_limit"; that way a bad command stream will not cause GPU hangs indefinitely. By default this threshold is 1, so a job will be kicked out after it hangs once.
2. If a job times out, the TDR routine will not reset all scheds/rings; instead it will only reset the given one indicated by @job of amdgpu_sriov_gpu_reset. That way we don't need to reset and recover each sched/ring if we already know which job caused the GPU hang.
3. Unblock sriov_gpu_reset for the AI family.
V2: 1: put the kickout of the guilty job after the sched is parked. 2: since parking the scheduler prior to kickout already takes a while, we can do a last check on the in-question job before doing the hw_reset.
TODO: 1: when a job is considered guilty, we should mark some flag in its fence status flags and let the UMD side be aware that this fence signaling is not due to job completion but to a job hang. 2: if a gpu reset causes all video memory to be lost, we need to introduce a new policy to implement TDR, like dropping all jobs not yet signaled; all IOCTLs on this device will then return ERROR DEVICE_LOST. This will be implemented later.
Signed-off-by: Monk Liu <Monk.Liu@amd.com> Reviewed-by: Christian König <christian.koenig@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com>