2016-09-30  sched/irqtime: No need for preempt-safe accessors  (Frederic Weisbecker)
We can safely use the preempt-unsafe accessors for irqtime when we flush its counters to kcpustat as IRQs are disabled at this time. Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Reviewed-by: Rik van Riel <riel@redhat.com> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Wanpeng Li <wanpeng.li@hotmail.com> Link: http://lkml.kernel.org/r/1474849761-12678-2-git-send-email-fweisbec@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-30  sched/fair: Fix min_vruntime tracking  (Peter Zijlstra)
While going through enqueue/dequeue to review the movement of set_curr_task() I noticed that the (2nd) update_min_vruntime() call in dequeue_entity() is suspect. It turns out it's actually wrong, because it will consider cfs_rq->curr, which could be the entry we just normalized. This mixes different vruntime forms and leads to failure. The purpose of the second update_min_vruntime() is to move min_vruntime forward if the entity we just removed is the one that was holding it back; _except_ for the DEQUEUE_SAVE case, because then we know it's a temporary removal and it will come back. However, since we do put_prev_task() _after_ dequeue(), cfs_rq->curr will still be set (and per the above, can be transformed into a different unit), so update_min_vruntime() should also consider curr->on_rq. This also fixes another corner case where the enqueue (which also does update_curr()->update_min_vruntime()) happens on the rq->lock break in schedule(), between dequeue and put_prev_task. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Fixes: 1e876231785d ("sched: Fix ->min_vruntime calculation in dequeue_entity()") Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-30  sched/debug: Add SCHED_WARN_ON()  (Peter Zijlstra)
Provide SCHED_WARN_ON as wrapper for WARN_ON_ONCE() to avoid CONFIG_SCHED_DEBUG wrappery. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
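For reference, such a wrapper could look roughly like the following sketch (an approximation of the idea, not necessarily the exact definition that was merged):

  #ifdef CONFIG_SCHED_DEBUG
  # define SCHED_WARN_ON(x)   WARN_ONCE(x, #x)
  #else
  # define SCHED_WARN_ON(x)   ((void)(x))
  #endif

Call sites can then write SCHED_WARN_ON(cond) unconditionally instead of wrapping each WARN_ON_ONCE() in #ifdef CONFIG_SCHED_DEBUG.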
2016-09-30  sched/core: Fix set_user_nice()  (Peter Zijlstra)
Almost all scheduler functions update state with the following pattern:

  if (queued)
          dequeue_task(rq, p, DEQUEUE_SAVE);
  if (running)
          put_prev_task(rq, p);

  /* update state */

  if (queued)
          enqueue_task(rq, p, ENQUEUE_RESTORE);
  if (running)
          set_curr_task(rq, p);

set_user_nice() however misses the running part, cure this. This was found by asserting we never enqueue 'current'. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
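A sketch of the cured sequence inside set_user_nice(), following the pattern quoted above (illustrative fragment; helper names as used elsewhere in the scheduler around this time):

  queued = task_on_rq_queued(p);
  running = task_current(rq, p);
  if (queued)
          dequeue_task(rq, p, DEQUEUE_SAVE);
  if (running)
          put_prev_task(rq, p);

  p->static_prio = NICE_TO_PRIO(nice);
  set_load_weight(p);
  /* ... remaining priority bookkeeping ... */

  if (queued)
          enqueue_task(rq, p, ENQUEUE_RESTORE);
  if (running)
          set_curr_task(rq, p);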
2016-09-30  sched/fair: Introduce set_curr_task() helper  (Peter Zijlstra)
Now that the ia64 only set_curr_task() symbol is gone, provide a helper just like put_prev_task(). Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
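The helper is small; a sketch of its likely shape, mirroring put_prev_task():

  static inline void set_curr_task(struct rq *rq, struct task_struct *p)
  {
          p->sched_class->set_curr_task(rq);
  }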
2016-09-30  sched/core, ia64: Rename set_curr_task()  (Peter Zijlstra)
Rename the ia64 only set_curr_task() function to free up the name. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Tony Luck <tony.luck@intel.com> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-30  sched/core: Fix incorrect utilization accounting when switching to fair class  (Vincent Guittot)
When a task switches to the fair scheduling class, the period between now and the last update of its utilization is accounted as running time, whatever happened during this period. This incorrect accounting applies to the task and also to the task group branch. When changing the property of a running task, like its list of allowed CPUs or its scheduling class, we follow the sequence:

- dequeue task
- put task
- change the property
- set task as current task
- enqueue task

The end of the sequence doesn't follow the normal sequence (as per __schedule()), which is:

- enqueue a task
- then set the task as current task

This incorrect ordering is the root cause of the incorrect utilization accounting. Update the sequence to follow the right one:

- dequeue task
- put task
- change the property
- enqueue task
- set task as current task

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Morten.Rasmussen@arm.com Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bsegall@google.com Cc: dietmar.eggemann@arm.com Cc: linaro-kernel@lists.linaro.org Cc: pjt@google.com Cc: yuyang.du@intel.com Link: http://lkml.kernel.org/r/1473666472-13749-8-git-send-email-vincent.guittot@linaro.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-30  sched/core: Optimize SCHED_SMT  (Peter Zijlstra)
Avoid pointless SCHED_SMT code when running on !SMT hardware. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-30  sched/core: Rewrite and improve select_idle_siblings()  (Peter Zijlstra)
select_idle_siblings() is a known pain point for a number of workloads; it either does too much or not enough, and sometimes just does plain wrong things. This rewrite attempts to address a number of issues (but sadly not all). The current code does an unconditional sched_domain iteration with the intent of finding an idle core (on SMT hardware). The problems this patch tries to address are:

- it's pointless to look for idle cores if the machine is really busy; at that point you're just wasting cycles.
- its behaviour is inconsistent between SMT and !SMT hardware, in that !SMT hardware ends up doing a scan for any idle CPU in the LLC domain, while SMT hardware does a scan for idle cores and, if that fails, falls back to a scan for idle threads on the 'target' core.

The new code replaces the sched_domain scan with 3 explicit scans:

1) search for an idle core in the LLC
2) search for an idle CPU in the LLC
3) search for an idle thread in the 'target' core

where 1 and 3 are conditional on SMT support and 1 and 2 have runtime heuristics to skip the step. Step 1) is conditional on sd_llc_shared->has_idle_cores; when a CPU goes idle and sd_llc_shared->has_idle_cores is false, we scan all SMT siblings of the CPU going idle. Similarly, we clear sd_llc_shared->has_idle_cores when we fail to find an idle core. Step 2) tracks the average cost of the scan and compares this to the average idle time guesstimate for the CPU doing the wakeup. There is a significant fudge factor involved to deal with the variability of the averages. Esp. hackbench was sensitive to this. Step 3) is unconditional; we assume (also per step 1) that scanning all SMT siblings in a core is 'cheap'. With this, SMT systems gain step 2, which cures a few benchmarks -- notably one from Facebook. One 'feature' of the sched_domain iteration, which we preserve in the new code, is that it would start scanning from the 'target' CPU, instead of scanning the cpumask in cpu id order. This keeps multiple CPUs in the LLC that are scanning for idle from ganging up and finding the same CPU quite as much. The downside is that tasks can end up hopping across the LLC for no apparent reason. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
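In rough outline, the rewritten selection logic has the following shape (a simplified sketch; the select_idle_core/cpu/smt helper names follow the description above and are approximations of the new code):

  static int select_idle_sibling(struct task_struct *p, int target)
  {
          struct sched_domain *sd;
          int i;

          if (idle_cpu(target))
                  return target;

          sd = rcu_dereference(per_cpu(sd_llc, target));
          if (!sd)
                  return target;

          i = select_idle_core(p, sd, target);    /* 1) idle core in the LLC (SMT only) */
          if ((unsigned int)i < nr_cpumask_bits)
                  return i;

          i = select_idle_cpu(p, sd, target);     /* 2) idle CPU in the LLC, cost-bounded */
          if ((unsigned int)i < nr_cpumask_bits)
                  return i;

          i = select_idle_smt(p, sd, target);     /* 3) idle thread in the 'target' core */
          if ((unsigned int)i < nr_cpumask_bits)
                  return i;

          return target;
  }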
2016-09-30  x86/cmpxchg, locking/atomics: Remove superfluous definitions  (Nikolay Borisov)
cmpxchg contained definitions for unused (x)add_* operations, dating back to the original ticket spinlock implementation. Nowadays these are unused so remove them. Signed-off-by: Nikolay Borisov <n.borisov.lkml@gmail.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: hpa@zytor.com Link: http://lkml.kernel.org/r/1474913478-17757-1-git-send-email-n.borisov.lkml@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-30  x86, locking/spinlocks: Remove ticket (spin)lock implementation  (Peter Zijlstra)
We've unconditionally used the queued spinlock for many releases now. It's time to remove the old ticket lock code. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Waiman Long <waiman.long@hpe.com> Cc: Waiman.Long@hpe.com Cc: david.vrabel@citrix.com Cc: dhowells@redhat.com Cc: pbonzini@redhat.com Cc: xen-devel@lists.xenproject.org Link: http://lkml.kernel.org/r/20160518184302.GO3193@twins.programming.kicks-ass.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-30  Merge branch 'linus' into locking/core, to pick up fixes  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-30  sched/core: Replace sd_busy/nr_busy_cpus with sched_domain_shared  (Peter Zijlstra)
Move the nr_busy_cpus thing from its hacky sd->parent->groups->sgc location into the much more natural sched_domain_shared location. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-30  sched/core: Introduce 'struct sched_domain_shared'  (Peter Zijlstra)
Since struct sched_domain is strictly per-CPU, introduce a structure that is shared between all 'identical' sched_domains. Limit it to SD_SHARE_PKG_RESOURCES domains for now, as we'll only use it for shared cache state; if another use comes up later we can easily relax this. While the sched_groups are normally shared between CPUs, these are not natural to use when we need some shared state on a domain level -- since that would require the domain to have a parent, which is not a given. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
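As a sketch, the new structure and its hookup could look roughly like this (only the refcount is needed at this point; shared-cache state such as nr_busy_cpus accrues in the follow-up patches):

  struct sched_domain_shared {
          atomic_t ref;
  };

  struct sched_domain {
          /* ... existing fields ... */
          struct sched_domain_shared *shared;
  };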
2016-09-30  sched/core: Restructure destroy_sched_domain()  (Peter Zijlstra)
There is no point in doing a call_rcu() for each domain, only do a callback for the root sched domain and clean up the entire set in one go. Also make the entire call chain be called destroy_sched_domain*() to remove confusion with the free_sched_domains() call, which does an entirely different thing. Both cpu_attach_domain() callers of destroy_sched_domain() can live without the call_rcu() because at those points the sched_domain hasn't been published yet. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
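A sketch of the restructured teardown (illustrative; assumes the usual rcu head embedded in struct sched_domain):

  static void destroy_sched_domains_rcu(struct rcu_head *rcu)
  {
          struct sched_domain *sd = container_of(rcu, struct sched_domain, rcu);

          while (sd) {
                  struct sched_domain *parent = sd->parent;

                  destroy_sched_domain(sd);
                  sd = parent;
          }
  }

  static void destroy_sched_domains(struct sched_domain *sd)
  {
          if (sd)
                  call_rcu(&sd->rcu, destroy_sched_domains_rcu);
  }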
2016-09-30  sched/core: Remove unused @cpu argument from destroy_sched_domain*()  (Peter Zijlstra)
Small cleanup; nothing uses the @cpu argument so make it go away. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-30  sched/wait: Introduce init_wait_entry()  (Oleg Nesterov)
The partial initialization of wait_queue_t in prepare_to_wait_event() looks ugly. This was done to shrink .text, but we can simply add the new helper which does the full initialization and shrink the compiled code a bit more. And. This way prepare_to_wait_event() can have more users. In particular we are ready to remove the signal_pending_state() checks from wait_bit_action_f helpers and change __wait_on_bit_lock() to use prepare_to_wait_event(). Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Al Viro <viro@ZenIV.linux.org.uk> Cc: Bart Van Assche <bvanassche@acm.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Neil Brown <neilb@suse.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20160906140055.GA6167@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
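The helper amounts to doing the full initialization in one place; a sketch (using the wait_queue_t/task_list names of this era):

  void init_wait_entry(wait_queue_t *wait, int flags)
  {
          wait->flags = flags;
          wait->private = current;
          wait->func = autoremove_wake_function;
          INIT_LIST_HEAD(&wait->task_list);
  }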
2016-09-30  sched/wait: Avoid abort_exclusive_wait() in __wait_on_bit_lock()  (Oleg Nesterov)
__wait_on_bit_lock() doesn't need abort_exclusive_wait() too. Right now it can't use prepare_to_wait_event() (see the next change), but it can do the additional finish_wait() if action() fails. abort_exclusive_wait() no longer has callers, remove it. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Al Viro <viro@ZenIV.linux.org.uk> Cc: Bart Van Assche <bvanassche@acm.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Neil Brown <neilb@suse.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20160906140053.GA6164@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-30  sched/wait: Avoid abort_exclusive_wait() in ___wait_event()  (Oleg Nesterov)
___wait_event() doesn't really need abort_exclusive_wait(), we can simply change prepare_to_wait_event() to remove the waiter from q->task_list if it was interrupted. This simplifies the code/logic, and this way prepare_to_wait_event() can have more users, see the next change. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Al Viro <viro@ZenIV.linux.org.uk> Cc: Bart Van Assche <bvanassche@acm.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Neil Brown <neilb@suse.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20160908164815.GA18801@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
--
 include/linux/wait.h |  7 +------
 kernel/sched/wait.c  | 35 +++++++++++++++++++++++++----------
 2 files changed, 26 insertions(+), 16 deletions(-)
2016-09-30  sched/wait: Fix abort_exclusive_wait(), it should pass TASK_NORMAL to wake_up()  (Oleg Nesterov)
Otherwise this logic only works if mode is "compatible" with another exclusive waiter. If some wq has both TASK_INTERRUPTIBLE and TASK_UNINTERRUPTIBLE waiters, abort_exclusive_wait() won't wake an uninterruptible waiter. The main user is __wait_on_bit_lock() and currently it is fine, but only because TASK_KILLABLE includes TASK_UNINTERRUPTIBLE and we do not have lock_page_interruptible() yet. Just use TASK_NORMAL and remove the "mode" arg from abort_exclusive_wait(). Yes, this means that (say) wake_up_interruptible() can wake up the non-interruptible waiter(s), but I think this is fine. And in fact I think that abort_exclusive_wait() must die, see the next change. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Al Viro <viro@ZenIV.linux.org.uk> Cc: Bart Van Assche <bvanassche@acm.org> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Neil Brown <neilb@suse.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20160906140047.GA6157@redhat.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-30  sched/fair: Fix fixed point arithmetic width for shares and effective load  (Dietmar Eggemann)
Since commit:

  2159197d6677 ("sched/core: Enable increased load resolution on 64-bit kernels")

we now have two different fixed point units for load:

- 'shares' in calc_cfs_shares() has 20 bit fixed point unit on 64-bit kernels. Therefore use scale_load() on MIN_SHARES.

- 'wl' in effective_load() has 10 bit fixed point unit. Therefore use scale_load_down() on tg->shares which has 20 bit fixed point unit on 64-bit kernels.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/1471874441-24701-1-git-send-email-dietmar.eggemann@arm.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
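For reference, a sketch of the two fixed point conversions involved, per the 64-bit load resolution definitions of that time (approximate):

  #define SCHED_FIXEDPOINT_SHIFT  10
  #ifdef CONFIG_64BIT
  # define NICE_0_LOAD_SHIFT      (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
  # define scale_load(w)          ((w) << SCHED_FIXEDPOINT_SHIFT)
  # define scale_load_down(w)     ((w) >> SCHED_FIXEDPOINT_SHIFT)
  #else
  # define NICE_0_LOAD_SHIFT      (SCHED_FIXEDPOINT_SHIFT)
  # define scale_load(w)          (w)
  # define scale_load_down(w)     (w)
  #endif

So MIN_SHARES (a 10-bit quantity) is scaled up before being compared with 'shares', and tg->shares (20-bit on 64-bit kernels) is scaled down before entering effective_load()'s 10-bit arithmetic.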
2016-09-30  sched/core, x86/topology: Fix NUMA in package topology bug  (Tim Chen)
Current code can call set_cpu_sibling_map() and invoke sched_set_topology() more than once (e.g. on CPU hot plug). When this happens after sched_init_smp() has been called, we lose the NUMA topology extension to sched_domain_topology in sched_init_numa(). This results in incorrect topology when the sched domain is rebuilt. This patch fixes the bug and issues warning if we call sched_set_topology() after sched_init_smp(). Signed-off-by: Tim Chen <tim.c.chen@linux.intel.com> Signed-off-by: Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Mike Galbraith <efault@gmx.de> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: bp@suse.de Cc: jolsa@redhat.com Cc: rjw@rjwysocki.net Link: http://lkml.kernel.org/r/1474485552-141429-2-git-send-email-srinivas.pandruvada@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-30  Merge branch 'linus' into sched/core, to pick up fixes  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-30  sctp: fix the issue sctp_diag uses lock_sock in rcu_read_lock  (Xin Long)
When sctp dumps all the ep->assocs, it needs to lock_sock first, but now it locks the sock inside rcu_read_lock, and lock_sock may sleep, which would break rcu_read_lock. This patch is to get and hold one sock while traversing the list. After that it gets out of rcu_read_lock, then locks and dumps it. Then it will traverse the list again to get the next one until all sctp socks are dumped. For sctp_diag_dump_one, it fixes this issue by holding asoc and moving cb() out of rcu_read_lock in sctp_transport_lookup_process. Fixes: 8f840e47f190 ("sctp: add the sctp_diag.c file") Signed-off-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-09-30  Merge branch 'sctp-fixes'  (David S. Miller)
Xin Long says:
====================
sctp: a bunch of fixes for prsctp polices

This patchset is to fix 2 issues for prsctp polices:

1. patch 1 and 2 fix "netperf-Throughput_Mbps -37.2% regression" issue when overloading the CPU.

2. patch 3 fix "prsctp polices should check both sides' prsctp_capable, instead of only local side".
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
2016-09-30  sctp: change to check peer prsctp_capable when using prsctp polices  (Xin Long)
Now before using prsctp polices, sctp uses asoc->prsctp_enable to check if prsctp is enabled. However, asoc->prsctp_enable being set only means the local host supports prsctp; sctp should not abandon packets if the peer host doesn't enable prsctp. So this patch is to use asoc->peer.prsctp_capable to check if prsctp is enabled on both sides, instead of asoc->prsctp_enable, as asoc's peer.prsctp_capable is set only when local and peer both enable prsctp. Fixes: a6c2f792873a ("sctp: implement prsctp TTL policy") Signed-off-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-09-30  sctp: remove prsctp_param from sctp_chunk  (Xin Long)
Now sctp uses chunk->prsctp_param to save the prsctp param for all the prsctp polices, but we don't actually need to introduce prsctp_param to sctp_chunk. We can just use chunk->sinfo.sinfo_timetolive for the RTX and BUF polices, and reuse msg->expires_at for the TTL policy, as the prsctp polices and the old expires policy are mutually exclusive. This patch is to remove prsctp_param from sctp_chunk, and reuse msg's expires_at for TTL and chunk's sinfo.sinfo_timetolive for the RTX and BUF polices. Note that sctp can't use chunk's sinfo.sinfo_timetolive for the TTL policy, as it needs a u64 variable to save the expires_at time. This one also fixes the "netperf-Throughput_Mbps -37.2% regression" issue. Fixes: a6c2f792873a ("sctp: implement prsctp TTL policy") Signed-off-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-09-30  sctp: move sent_count to the memory hole in sctp_chunk  (Xin Long)
Running pahole on sctp_chunk shows it has 2 memory holes:

  struct sctp_chunk {
      struct list_head list;
      atomic_t refcnt;
      /* XXX 4 bytes hole, try to pack */
      ...
      long unsigned int prsctp_param;
      int sent_count;
      /* XXX 4 bytes hole, try to pack */

This patch is to move sent_count up to fill the 1st one and eliminate the 2nd one. It's not just another struct compaction, it also fixes the "netperf-Throughput_Mbps -37.2% regression" issue when overloading the CPU. Fixes: a6c2f792873a ("sctp: implement prsctp TTL policy") Signed-off-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-09-30  tg3: Avoid NULL pointer dereference in tg3_io_error_detected()  (Milton Miller)
While the driver is probing the adapter, an error may occur before the netdev structure is allocated and attached to pci_dev. In this case, not only netdev isn't available, but the tg3 private structure is also not available as it is just math from the NULL pointer, so dereferences must be skipped. The following trace is seen when the error is triggered: [1.402247] Unable to handle kernel paging request for data at address 0x00001a99 [1.402410] Faulting instruction address: 0xc0000000007e33f8 [1.402450] Oops: Kernel access of bad area, sig: 11 [#1] [1.402481] SMP NR_CPUS=2048 NUMA PowerNV [1.402513] Modules linked in: [1.402545] CPU: 0 PID: 651 Comm: eehd Not tainted 4.4.0-36-generic #55-Ubuntu [1.402591] task: c000001fe4e42a20 ti: c000001fe4e88000 task.ti: c000001fe4e88000 [1.402742] NIP: c0000000007e33f8 LR: c0000000007e3164 CTR: c000000000595ea0 [1.402787] REGS: c000001fe4e8b790 TRAP: 0300 Not tainted (4.4.0-36-generic) [1.402832] MSR: 9000000100009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 28000422 XER: 20000000 [1.403058] CFAR: c000000000008468 DAR: 0000000000001a99 DSISR: 42000000 SOFTE: 1 GPR00: c0000000007e3164 c000001fe4e8ba10 c0000000015c5e00 0000000000000000 GPR04: 0000000000000001 0000000000000000 0000000000000039 0000000000000299 GPR08: 0000000000000000 0000000000000001 c000001fe4e88000 0000000000000006 GPR12: 0000000000000000 c00000000fb40000 c0000000000e6558 c000003ca1bffd00 GPR16: 0000000000000000 0000000000000000 0000000000000000 0000000000000000 GPR20: 0000000000000000 0000000000000000 0000000000000000 c000000000d52768 GPR24: c000000000d52740 0000000000000100 c000003ca1b52000 0000000000000002 GPR28: 0000000000000900 0000000000000000 c00000000152a0c0 c000003ca1b52000 [1.404226] NIP [c0000000007e33f8] tg3_io_error_detected+0x308/0x340 [1.404265] LR [c0000000007e3164] tg3_io_error_detected+0x74/0x340 This patch avoids the NULL pointer dereference by moving the access after the netdev NULL pointer check on tg3_io_error_detected(). Also, we add a check for netdev being NULL on tg3_io_resume() [suggested by Michael Chan]. Fixes: 0486a063b1ff ("tg3: prevent ifup/ifdown during PCI error recovery") Fixes: dfc8f370316b ("net/tg3: Release IRQs on permanent error") Tested-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com> Signed-off-by: Milton Miller <miltonm@us.ibm.com> Signed-off-by: Guilherme G. Piccoli <gpiccoli@linux.vnet.ibm.com> Acked-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2016-09-29  Merge tag 'drm-fixes-for-v4.8-final' of git://people.freedesktop.org/~airlied/linux  (Linus Torvalds)
Pull drm fixes from Dave Airlie: "drm fixes for final 4.8. One big regression fix for udl, along with two amdgpu fixes and two nouveau fixes. All seems pretty safe and useful"

* tag 'drm-fixes-for-v4.8-final' of git://people.freedesktop.org/~airlied/linux:
  drm/udl: fix line iterator in damage handling
  drm/radeon/si/dpm: add workaround for for Jet parts
  drm/amdgpu: disable CRTCs before teardown
  drm/nouveau: Revert "bus: remove cpu_coherent flag"
  drm/nouveau/fifo/nv04: avoid ramht race against cookie insertion
2016-09-29  Merge branch 'libnvdimm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm  (Linus Torvalds)
Pull libnvdimm fixes from Dan Williams:

- Four fixes for "flush hint" support. Flush hints are addresses advertised by the ACPI 6+ NFIT (NVDIMM Firmware Interface Table) that when written and fenced guarantee that writes pending in platform write buffers (outside the cpu) have been flushed to media. They might also be used by hypervisors as a trigger condition to flush guest-persistent memory ranges to storage. Fix a potential data corruption issue, a broken definition of the hint array, a wrong allocation size for the unit test implementation of the flush hint table, and missing NULL check in an error path. The unit test, while it did not prevent these bugs from being merged, at least triggered occasional crashes in advance of production usages.

- Fix handling of ACPI DSM error status results. The DSM mechanism allows communication with platform and memory device firmware. We correctly parse known errors, but were silently ignoring others. Fix it to consistently fail any command with a non-zero status return that we otherwise do not interpret / handle.

* 'libnvdimm-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
  libnvdimm, region: fix flush hint table thinko
  nfit: fail DSMs that return non-zero status by default
  libnvdimm: fix devm_nvdimm_memremap() error path
  tools/testing/nvdimm: fix allocation range for mock flush hint tables
  nvdimm: fix PHYS_PFN/PFN_PHYS mixup
2016-09-29  Merge tag 'perf-core-for-mingo-20160929' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux into perf/core  (Ingo Molnar)
Pull perf/core improvements and fixes from Arnaldo Carvalho de Melo:

User visible changes:
---------------------

New features:
- Add support for using symbols in address filters with Intel PT and ARM CoreSight (hardware assisted tracing facilities) (Adrian Hunter, Mathieu Poirier)

Fixes:
- Fix MMAP event synthesis for pre-existing threads when no hugetlbfs mount is in place (Adrian Hunter)
- Don't ignore kernel idle symbols in 'perf script' (Adrian Hunter)
- Assorted Intel PT fixes (Adrian Hunter)

Improvements:
- Fix handling of C++ symbols in 'perf probe' (Masami Hiramatsu)
- Beautify sched_[gs]et_attr return value in 'perf trace' (Arnaldo Carvalho de Melo)

Infrastructure changes:
-----------------------

New features:
- Add dwarf unwind 'perf test' for powerpc (Ravi Bangoria)

Fixes:
- Fix error paths in 'perf record' (Adrian Hunter)

Documentation:
- Update documentation info about quipper, a C++ parser for converting to/from perf.data/chromium profiling format (Simon Que)

Build Fixes:
- Fix building in 32 bit platform with libbabeltrace (Wang Nan)

Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com> Signed-off-by: Ingo Molnar <mingo@kernel.org>
2016-09-29  x86/init: Fix cr4_init_shadow() on CR4-less machines  (Andy Lutomirski)
cr4_init_shadow() will panic on 486-like machines without CR4. Fix it using __read_cr4_safe(). Reported-by: david@saggiorato.net Signed-off-by: Andy Lutomirski <luto@kernel.org> Reviewed-by: Borislav Petkov <bp@suse.de> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Fixes: 1e02ce4cccdc ("x86: Store a per-cpu shadow copy of CR4") Link: http://lkml.kernel.org/r/43a20f81fb504013bf613913dc25574b45336a61.1475091074.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
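The fix amounts to reading CR4 through the fault-tolerant accessor; roughly:

  static inline void cr4_init_shadow(void)
  {
          this_cpu_write(cpu_tlbstate.cr4, __read_cr4_safe());
  }

On CPUs without CR4, __read_cr4_safe() returns 0 instead of faulting, so the shadow simply starts out as 0.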
2016-09-29  MIPS: Fix detection of unsupported highmem with cache aliases  (Paul Burton)
The paging_init() function contains code which detects that highmem is in use but unsupported due to dcache aliasing. However this code was ineffective because it was being run before the caches are probed, meaning that cpu_has_dc_aliases would always evaluate to false (unless a platform overrides it to a compile-time constant) and the detection of the unsupported case is never triggered. The kernel would then go on to attempt to use highmem & either hit coherency issues or trigger the BUG_ON in flush_kernel_dcache_page(). Fix this by running paging_init() later than cpu_cache_init(), such that the cpu_has_dc_aliases macro will evaluate correctly & the unsupported highmem case will be detected successfully. This then leads to a formerly hidden issue in that mem_init_free_highmem() will attempt to free all highmem pages, even though we're avoiding use of them & don't have valid page structs for them. This leads to an invalid pointer dereference & a TLB exception. Avoid this by skipping the loop in mem_init_free_highmem() if cpu_has_dc_aliases evaluates true. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: Rabin Vincent <rabinv@axis.com> Cc: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Jerome Marchand <jmarchan@redhat.com> Cc: Alexander Sverdlin <alexander.sverdlin@gmail.com> Cc: Aurelien Jarno <aurelien@aurel32.net> Cc: Jaedon Shin <jaedon.shin@gmail.com> Cc: Toshi Kani <toshi.kani@hpe.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Sergey Ryazanov <ryazanov.s.a@gmail.com> Cc: Jonas Gorski <jogo@openwrt.org> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14184/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-09-29  MIPS: Malta: Fix IOCU disable switch read for MIPS64  (Paul Burton)
Malta boards used with CPU emulators feature a switch to disable use of an IOCU. Software has to check this switch & ignore any present IOCU if the switch is closed. The read used to do this was unsafe for 64 bit kernels, as it simply casted the address 0xbf403000 to a pointer & dereferenced it. Whilst in a 32 bit kernel this would access kseg1, in a 64 bit kernel this attempts to access xuseg & results in an address error exception. Fix by accessing a correctly formed ckseg1 address generated using the CKSEG1ADDR macro. Whilst modifying this code, define the name of the register and the bit we care about within it, which indicates whether PCI DMA is routed to the IOCU or straight to DRAM. The code previously checked that bit 0 was also set, but the least significant 7 bits of the CONFIG_GEN0 register contain the value of the MReqInfo signal provided to the IOCU OCP bus, so singling out bit 0 makes little sense & that part of the check is dropped. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: b6d92b4a6bdb ("MIPS: Add option to disable software I/O coherency.") Cc: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: Kees Cook <keescook@chromium.org> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14187/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
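A sketch of the corrected access (the register/bit names here are placeholders for illustration; the physical address corresponds to the former 0xbf403000 kseg1 address):

  #define ROCIT_CONFIG_GEN0               0x1f403000      /* assumed name */
  #define ROCIT_CONFIG_GEN0_PCI_IOCU      BIT(7)          /* assumed bit position */

  u32 cfg = __raw_readl((u32 *)CKSEG1ADDR(ROCIT_CONFIG_GEN0));
  bool pci_dma_to_iocu = cfg & ROCIT_CONFIG_GEN0_PCI_IOCU;

CKSEG1ADDR() yields a properly sign-extended ckseg1 address on both 32-bit and 64-bit kernels, avoiding the xuseg access described above.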
2016-09-29  MIPS: Fix BUILD_ROLLBACK_PROLOGUE for microMIPS  (Paul Burton)
When the kernel is built for microMIPS, branch targets need to be known to be microMIPS code in order to result in bit 0 of the PC being set. The branch target in the BUILD_ROLLBACK_PROLOGUE macro was simply the end of the macro, which may be pointing at padding rather than at code. This results in recent enough GNU linkers complaining like so:

  mips-img-linux-gnu-ld: arch/mips/built-in.o: .text+0x3e3c: Unsupported branch between ISA modes.
  mips-img-linux-gnu-ld: final link failed: Bad value
  Makefile:936: recipe for target 'vmlinux' failed
  make: *** [vmlinux] Error 1

Fix this by changing the branch target to be the start of the appropriate handler, skipping over any padding. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14019/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-09-29  MIPS: clear execution hazard after changing FTLB enable  (Paul Burton)
On current P-series cores from Imagination the FTLB can be enabled or disabled via a bit in the Config6 register, and an execution hazard is created by changing the value of that bit. The ftlb_disable function already cleared that hazard, but that does no good for other callers. Clear the hazard in the set_ftlb_enable function that creates it, and only for the cores where it applies. This has the effect of reverting c982c6d6c48b ("MIPS: cpu-probe: Remove cp0 hazard barrier when enabling the FTLB") which was incorrect. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: c982c6d6c48b ("MIPS: cpu-probe: Remove cp0 hazard barrier when enabling the FTLB") Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14023/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-09-29  MIPS: Configure FTLB after probing TLB sizes from config4  (Paul Burton)
On some cores (proAptiv, P5600) we make use of the sizes of the TLBs to determine the desired FTLB:VTLB write ratio. However set_ftlb_enable & thus calculate_ftlb_probability is called before decode_config4. This results in us calculating a probability based on zero sizes, and we end up setting FTLBP=3 for a 3:1 FTLB:VTLB write ratio in all cases. This will make abysmal use of the available FTLB resources in the affected cores. Fix this by configuring the FTLB probability after having decoded config4. However we do need to have enabled the FTLB before that point such that fields in config4 actually reflect that an FTLB is present. So set_ftlb_enable is now called twice, with flags indicating that it should configure the write probability only the second time. Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: cf0a8aa0226d ("MIPS: cpu-probe: Set the FTLB probability bit on supported cores") Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14022/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-09-29  MIPS: Stop setting I6400 FTLBP  (Paul Burton)
The FTLBP field in Config7 for the I6400 is intended as chicken bits for debugging rather than as a field that software actually makes use of. For best performance, FTLBP should be left at its default value of 0 with all TLB writes hitting the FTLB by default. Additionally, since set_ftlb_enable is called from decode_configs before decode_config4 which determines the size of the TLBs, this was previously always setting FTLBP=3 for a 3:1 FTLB:VTLB write ratio which makes abysmal use of the available FTLB resources. This effectively reverts b0c4e1b79d8a ("MIPS: Set up FTLB probability for I6400"). Signed-off-by: Paul Burton <paul.burton@imgtec.com> Fixes: b0c4e1b79d8a ("MIPS: Set up FTLB probability for I6400") Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14021/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-09-29  MIPS: DEC: Avoid la pseudo-instruction in delay slots  (Ralf Baechle)
When expanding the la or dla pseudo-instruction in a delay slot the GNU assembler will complain should the pseudo-instruction expand to multiple actual instructions, since only the first of them will be in the delay slot leading to the pseudo-instruction being only partially executed if the branch is taken. Use of PTR_LA in the dec int-handler.S leads to such warnings:

  arch/mips/dec/int-handler.S: Assembler messages:
  arch/mips/dec/int-handler.S:149: Warning: macro instruction expanded into multiple instructions in a branch delay slot
  arch/mips/dec/int-handler.S:198: Warning: macro instruction expanded into multiple instructions in a branch delay slot

Avoid this by open coding the PTR_LA macros. Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-09-29  MIPS: Octeon: mark GPIO controller node not populated after IRQ init.  (Steven J. Hill)
We clear the OF_POPULATED flag for the GPIO controller node on Octeon processors. Otherwise, none of the devices hanging on the GPIO lines are probed. The 'gpio-leds' driver on OCTEON failed to probe in addition to other devices on Cavium 71xx and 78xx development boards. Fixes: 15cc2ed6dcf9 ("of/irq: Mark initialised interrupt controllers as populated") Signed-off-by: Steven J. Hill <steven.hill@cavium.com> Tested-by: Aaro Koskinen <aaro.koskinen@iki.fi> Cc: David Daney <david.daney@cavium.com> Cc: Rob Herring <robh@kernel.org> Cc: linux-mips@linux-mips.org Cc: devicetree@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14091/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-09-29  MIPS: uprobes: fix use of uninitialised variable  (Marcin Nowakowski)
arch_uprobe_pre_xol needs to emulate a branch if a branch instruction has been replaced with a breakpoint, but in fact an uninitialised local variable was passed to the emulator routine instead of the original instruction. Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com> Fixes: 40e084a506eb ('MIPS: Add uprobes support.') Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14300/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-09-29  MIPS: uprobes: remove incorrect set_orig_insn  (Marcin Nowakowski)
Generic kernel code implements a weak version of set_orig_insn that moves cached 'insn' from arch_uprobe to the original code location when the trap is removed. MIPS variant used arch_uprobe->orig_inst which was never initialised properly, so this code only inserted a nop instead of the original instruction. With that change orig_inst can also be safely removed. Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com> Fixes: 40e084a506eb ('MIPS: Add uprobes support.') Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14299/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-09-29  MIPS: fix uretprobe implementation  (Marcin Nowakowski)
arch_uretprobe_hijack_return_addr should replace the return address for a call with a trampoline address. Signed-off-by: Marcin Nowakowski <marcin.nowakowski@imgtec.com> Fixes: 40e084a506eb ('MIPS: Add uprobes support.') Cc: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/14298/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
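The expected shape of the fix on MIPS, where the return address lives in $ra (regs[31]); a sketch:

  unsigned long arch_uretprobe_hijack_return_addr(unsigned long trampoline_vaddr,
                                                  struct pt_regs *regs)
  {
          unsigned long ra = regs->regs[31];      /* original return address */

          regs->regs[31] = trampoline_vaddr;      /* divert the return to the trampoline */

          return ra;
  }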
2016-09-29  MIPS: smp-cps: Avoid BUG() when offlining pre-r6 CPUs  (Matt Redfearn)
Commit 0d2808f338c7 ("MIPS: smp-cps: Add support for CPU hotplug of MIPSr6 processors") added a call to mips_cm_lock_other in order to lock the CPC in CPUs containing a version 3 or higher Coherence Manager, which use the general CM core other register, where previous CMs had a dedicated core other register for the CPC. A kernel BUG() is triggered, however, if mips_cm_lock_other is called with a VP other than 0 on a CPU with CM < 3, a condition introduced by 0d2808f338c7. Avoid the BUG() by always locking VP0 when locking the CPC, since the required register, cpc_stat_conf, is shared by all vps in a core. Fixes: 0d2808f338c7 ("MIPS: smp-cps: Add support for CPU hotplug...) Signed-off-by: Matt Redfearn <matt.redfearn@imgtec.com> Cc: Qais Yousef <qsyousef@gmail.com> Cc: Masahiro Yamada <yamada.masahiro@socionext.com> Cc: James Hogan <james.hogan@imgtec.com> Cc: Paul Burton <paul.burton@imgtec.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: linux-mips@linux-mips.org Cc: linux-kernel@vger.kernel.org Patchwork: https://patchwork.linux-mips.org/patch/14297/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
2016-09-29  ARM: 8617/1: dma: fix dma_max_pfn()  (Roger Quadros)
Since commit 6ce0d2001692 ("ARM: dma: Use dma_pfn_offset for dma address translation"), dma_to_pfn() already returns the PFN with the physical memory start offset so we don't need to add it again. This fixes USB mass storage lock-up problem on systems that can't do DMA over the entire physical memory range (e.g.) Keystone 2 systems with 4GB RAM can only do DMA over the first 2GB. [K2E-EVM]. What happens there is that without this patch SCSI layer sets a wrong bounce buffer limit in scsi_calculate_bounce_limit() for the USB mass storage device. dma_max_pfn() evaluates to 0x8fffff and bounce_limit is set to 0x8fffff000 whereas maximum DMA'ble physical memory on Keystone 2 is 0x87fffffff. This results in non DMA'ble pages being given to the USB controller and hence the lock-up. NOTE: in the above case, USB-SCSI-device's dma_pfn_offset was showing as 0. This should have really been 0x780000 as on K2e, LOWMEM_START is 0x80000000 and HIGHMEM_START is 0x800000000. DMA zone is 2GB so dma_max_pfn should be 0x87ffff. The incorrect dma_pfn_offset for the USB storage device is because USB devices are not correctly inheriting the dma_pfn_offset from the USB host controller. This will be fixed by a separate patch. Fixes: 6ce0d2001692 ("ARM: dma: Use dma_pfn_offset for dma address translation") Cc: stable@vger.kernel.org Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Santosh Shilimkar <santosh.shilimkar@oracle.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Olof Johansson <olof@lixom.net> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Linus Walleij <linus.walleij@linaro.org> Reported-by: Grygorii Strashko <grygorii.strashko@ti.com> Signed-off-by: Roger Quadros <rogerq@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
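Conceptually the change is the following (sketch of the before/after definitions):

  /* before: PHYS_PFN_OFFSET ends up being applied twice */
  static inline unsigned long dma_max_pfn(struct device *dev)
  {
          return PHYS_PFN_OFFSET + dma_to_pfn(dev, *dev->dma_mask);
  }

  /* after: dma_to_pfn() already returns an absolute PFN */
  static inline unsigned long dma_max_pfn(struct device *dev)
  {
          return dma_to_pfn(dev, *dev->dma_mask);
  }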
2016-09-29  ARM: 8616/1: dt: Respect property size when parsing CPUs  (Robin Murphy)
Whilst MPIDR values themselves are less than 32 bits, it is still perfectly valid for a DT to have #address-cells > 1 in the CPUs node, resulting in the "reg" property having leading zero cell(s). In that situation, the big-endian nature of the data conspires with the current behaviour of only reading the first cell to cause the kernel to think all CPUs have ID 0, and become resoundingly unhappy as a consequence. Take the full property length into account when parsing CPUs so as to be correct under any circumstances. Cc: Russell King <linux@armlinux.org.uk> Signed-off-by: Robin Murphy <robin.murphy@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
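A sketch of the idea, reading the trailing cell of the "reg" property rather than blindly taking the first one (error handling abbreviated; 'bad' is a hypothetical label):

  const __be32 *cell;
  int len;

  cell = of_get_property(cpu, "reg", &len);
  if (!cell || len < 4)
          goto bad;

  /* The MPIDR sits in the last 32-bit cell; any leading cells are zero. */
  hwid = be32_to_cpup(cell + len / 4 - 1) & MPIDR_HWID_BITMASK;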
2016-09-29  perf tests: Add dwarf unwind test for powerpc  (Ravi Bangoria)
The user stack dump feature was recently added for powerpc, but there was no test case available to test it. This test works the same as on other architectures: it prepares a stack frame on the perf test thread and compares each frame by unwinding it.

  $ ./perf test 50
  50: Test dwarf unwind : Ok

User stack dump for powerpc: https://lkml.org/lkml/2016/4/28/482 Signed-off-by: Ravi Bangoria <ravi.bangoria@linux.vnet.ibm.com> Acked-by: Jiri Olsa <jolsa@kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Anju T Sudhakar <anju@linux.vnet.ibm.com> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Matt Fleming <matt.fleming@intel.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Wang Nan <wangnan0@huawei.com> Cc: linuxppc-dev@lists.ozlabs.org Link: http://lkml.kernel.org/r/1474267100-31079-1-git-send-email-ravi.bangoria@linux.vnet.ibm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-09-29  perf probe: Match linkage name with mangled name  (Masami Hiramatsu)
Match linkage name with mangled name if exists. The linkage_name is used for storing mangled name of the object. Thus, this allows 'perf probe' to find appropriate probe point from mangled symbol as below. E.g. without this fix: ---- $ perf probe -x /usr/lib64/libstdc++.so.6 \ -D _ZNKSt15basic_fstreamXXIwSt11char_traitsIwEE7is_openEv Probe point '_ZNKSt15basic_fstreamXXIwSt11char_traitsIwEE7is_openEv' not found. Error: Failed to add events. ---- With this fix, perf probe can find the correct one. ---- $ perf probe -x /usr/lib64/libstdc++.so.6 \ -D _ZNKSt15basic_fstreamXXIwSt11char_traitsIwEE7is_openEv p:probe_libstdc/_ZNKSt15basic_fstreamXXIwSt11char_traitsIwEE7is_openEv /usr/lib64/libstdc++.so.6.0.22:0x8ca60 ---- Committer notes: After the fix, setting it for real (no -D/--definition, that amounts to a --dry-run): # perf probe -x /usr/lib64/libstdc++.so.6 _ZNKSt15basic_fstreamXXIwSt11char_traitsIwEE7is_openEv Added new event: probe_libstdc:_ZNKSt15basic_fstreamXXIwSt11char_traitsIwEE7is_openEv (on _ZNKSt15basic_fstreamXXIwSt11char_traitsIwEE7is_openEv in /usr/lib64/libstdc++.so.6.0.22) You can now use it in all perf tools, such as: perf record -e probe_libstdc:_ZNKSt15basic_fstreamXXIwSt11char_traitsIwEE7is_openEv -aR sleep 1 # perf probe -l probe_libstdc:* probe_libstdc:_ZNKSt15basic_fstreamXXIwSt11char_traitsIwEE7is_openEv (on is_open@libstdc++-v3/include/fstream in /usr/lib64/libstdc++.so.6.0.22) # Reported-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Jiri Olsa <jolsa@kernel.org> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/147464493162.29804.16715053505069382443.stgit@devbox Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
2016-09-29  perf probe: Fix to cut off incompatible chars from group name  (Masami Hiramatsu)
Cut off the characters which can not use for group name of uprobes when making it based on executable filename. For example, if the exec name is libstdc++.so, without this fix perf probe generates "probe_libstdc++" as the group name, but it is failed to set because '+' can not be used for group name. With this fix perf accepts only alphabet, number or '_' for group name, thus perf generates "probe_libstdc" as the group name. E.g. with this fix, you can see the event name has no "+". ---- $ ./perf probe -x /usr/lib64/libstdc++.so.6 -D is_open p:probe_libstdc/is_open /usr/lib64/libstdc++.so.6.0.22:0x8ca80 p:probe_libstdc/is_open_1 /usr/lib64/libstdc++.so.6.0.22:0x8ca70 p:probe_libstdc/is_open_2 /usr/lib64/libstdc++.so.6.0.22:0x8ca60 p:probe_libstdc/is_open_3 /usr/lib64/libstdc++.so.6.0.22:0xb0ad0 p:probe_libstdc/is_open_4 /usr/lib64/libstdc++.so.6.0.22:0xecca9 ---- Committer note: Before this fix: # perf probe -x /usr/lib64/libstdc++.so.6 is_open Failed to write event: Invalid argument Error: Failed to add events. # After the fix: # perf probe -x /usr/lib64/libstdc++.so.6 is_open Added new events: probe_libstdc:is_open (on is_open in /usr/lib64/libstdc++.so.6.0.22) probe_libstdc:is_open_1 (on is_open in /usr/lib64/libstdc++.so.6.0.22) probe_libstdc:is_open_2 (on is_open in /usr/lib64/libstdc++.so.6.0.22) probe_libstdc:is_open_3 (on is_open in /usr/lib64/libstdc++.so.6.0.22) probe_libstdc:is_open_4 (on is_open in /usr/lib64/libstdc++.so.6.0.22) You can now use it in all perf tools, such as: perf record -e probe_libstdc:is_open_4 -aR sleep 1 # perf probe -l probe_libstdc:* probe_libstdc:is_open (on is_open@libstdc++-v3/include/fstream in /usr/lib64/libstdc++.so.6.0.22) probe_libstdc:is_open_1 (on is_open@libstdc++-v3/include/fstream in /usr/lib64/libstdc++.so.6.0.22) probe_libstdc:is_open_2 (on is_open@libstdc++-v3/include/fstream in /usr/lib64/libstdc++.so.6.0.22) probe_libstdc:is_open_3 (on is_open@src/c++98/basic_file.cc in /usr/lib64/libstdc++.so.6.0.22) probe_libstdc:is_open_4 (on stdio_filebuf:5@include/ext/stdio_filebuf.h in /usr/lib64/libstdc++.so.6.0.22) # Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: Jiri Olsa <jolsa@kernel.org> Cc: David Ahern <dsahern@gmail.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/147464491667.29804.9553638175441827970.stgit@devbox Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>