Age  Commit message  Author
2016-11-30  powerpc/prom: Switch to using structs for ibm_architecture_vec  (Michael Ellerman)
Now that we've defined structures to describe each of the client architecture vectors, we can use those to construct the value we pass to firmware. This avoids the tricks we previously played with the W() macro, allows us to properly endian annotate fields, and should help to avoid bugs introduced by failing to have the correct number of zero pad bytes between fields. It also means we can avoid hard coding IBM_ARCH_VEC_NRCORES_OFFSET in order to update the max_cpus value and instead just set it. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
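For illustration, a minimal sketch of the idea (the struct and field names below are invented for this example, not the ones in prom_init.c): each option vector becomes a packed struct with fixed-size, endian-annotated members, so padding and byte order are checked by the compiler rather than by hand.

#include <linux/types.h>

/* Illustrative only: a hypothetical option vector expressed as a packed
 * struct instead of a hand-built array of W() macro entries. */
struct example_option_vector {
        u8      byte1;
        u8      arch_versions;
        __be16  vec_len;        /* endian-annotated, not open-coded */
        __be32  max_cpus;       /* set directly, no hard-coded byte offset */
        u8      reserved[2];    /* zero padding is explicit */
} __packed;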
2016-11-30  powerpc/prom: Define structs for client architecture vectors  (Michael Ellerman)
The "client architecture vectors" are a series of structures we pass to firmware to define various things, such as what processors we support and many other options. Each structure is entirely different so we have to define a different struct for each one, but that's OK. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30  powerpc/pseries: add definitions for new H_SIGNAL_SYS_RESET hcall  (Nicholas Piggin)
This has not made its way to a PAPR release yet, but we have an hcall number assigned.

H_SIGNAL_SYS_RESET = 0x380

Syntax:
hcall(uint64 H_SIGNAL_SYS_RESET, int64 target);

Generate a system reset NMI on the threads indicated by target.

Values for target:
-1 = target all online threads including the caller
-2 = target all online threads except for the caller
All other negative values: reserved
Positive values: the thread to be targeted, obtained from the value of the "ibm,ppc-interrupt-server#s" property of the CPU in the OF device tree.

Semantics:
- Invalid target: return H_Parameter.
- Otherwise: generate a system reset NMI on target thread(s), return H_Success.

This will be used by crash/debug code to get stuck CPUs into a known state.

Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
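A hedged sketch of issuing the hcall from pseries kernel code; only the hcall number comes from the text above, plpar_hcall_norets() is the existing hcall wrapper, and the remaining names are illustrative.

#include <asm/hvcall.h>

#ifndef H_SIGNAL_SYS_RESET
#define H_SIGNAL_SYS_RESET      0x380   /* value quoted above */
#endif

/* Illustrative names for the target encodings described above. */
#define SYS_RESET_ALL           (-1)    /* all online threads, including caller */
#define SYS_RESET_ALL_OTHERS    (-2)    /* all online threads, except caller */

/* Send a system reset NMI to the given thread(s); target is either one of
 * the values above or an "ibm,ppc-interrupt-server#s" number. */
static long signal_sys_reset(long target)
{
        return plpar_hcall_norets(H_SIGNAL_SYS_RESET, target);
}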
2016-11-30  powerpc: Enable CONFIG_KEXEC_FILE in powerpc server defconfigs  (Thiago Jung Bauermann)
Enable CONFIG_KEXEC_FILE in powernv_defconfig, ppc64_defconfig and pseries_defconfig. It depends on CONFIG_CRYPTO_SHA256=y, so add that as well. Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30  powerpc/kexec: Enable kexec_file_load() syscall  (Thiago Jung Bauermann)
Define the Kconfig symbol so that the kexec_file_load() code can be built, and wire up the syscall so that it can be called. Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30  powerpc: Add purgatory for kexec_file_load() implementation  (Thiago Jung Bauermann)
This purgatory implementation is based on the versions from kexec-tools and kexec-lite, with additional changes. Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30  powerpc: Add support code for kexec_file_load()  (Thiago Jung Bauermann)
This patch adds the support code needed for implementing kexec_file_load() on powerpc. This consists of functions to load the ELF kernel, either big or little endian, and to set up the purgatory environment which switches from the first kernel to the second kernel. None of this code is built yet, as it depends on CONFIG_KEXEC_FILE which we have not yet defined. Although we could define CONFIG_KEXEC_FILE in this patch, we'd then have a window in history where the kconfig symbol is present but the syscall is not, which would be awkward. Signed-off-by: Josh Sklar <sklar@linux.vnet.ibm.com> Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30  powerpc: Change places using CONFIG_KEXEC to use CONFIG_KEXEC_CORE instead  (Thiago Jung Bauermann)
Commit 2965faa5e03d ("kexec: split kexec_load syscall from kexec core code") introduced CONFIG_KEXEC_CORE so that CONFIG_KEXEC means whether the kexec_load system call should be compiled-in and CONFIG_KEXEC_FILE means whether the kexec_file_load system call should be compiled-in. These options can be set independently from each other. Since until now powerpc only supported kexec_load, CONFIG_KEXEC and CONFIG_KEXEC_CORE were synonyms. That is not the case anymore, so we need to make a distinction. Almost all places where CONFIG_KEXEC was being used should be using CONFIG_KEXEC_CORE instead, since kexec_file_load also needs that code compiled in. Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
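As an illustration of the mechanical shape of these conversions (a hypothetical header fragment, not an actual hunk from the patch), only the guard changes:

struct kimage;

/* Before: only available when the kexec_load syscall is enabled. */
#ifdef CONFIG_KEXEC
extern void default_machine_kexec(struct kimage *image);
#endif

/* After: available whenever the shared kexec core is enabled, which covers
 * both kexec_load and the upcoming kexec_file_load. */
#ifdef CONFIG_KEXEC_CORE
extern void default_machine_kexec(struct kimage *image);
#endif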
2016-11-30  kexec_file: Factor out kexec_locate_mem_hole from kexec_add_buffer  (Thiago Jung Bauermann)
kexec_locate_mem_hole will be used by the PowerPC kexec_file_load implementation to find free memory for the purgatory stack. Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com> Acked-by: Dave Young <dyoung@redhat.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30  kexec_file: Change kexec_add_buffer to take kexec_buf as argument  (Thiago Jung Bauermann)
This is done to simplify the kexec_add_buffer argument list. Adapt all callers to set up a kexec_buf to pass to kexec_add_buffer. In addition, change the type of kexec_buf.buffer from char * to void *. There is no particular reason for it to be a char *, and the change allows us to get rid of 3 existing casts to char * in the code. Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com> Acked-by: Dave Young <dyoung@redhat.com> Acked-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
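A rough sketch of the resulting calling convention, assuming the kexec_buf field names of the time; error handling is trimmed and the helper is hypothetical.

#include <linux/kernel.h>
#include <linux/kexec.h>

/* Hypothetical helper: stage a blob for the next kernel via the struct API. */
static int stage_blob(struct kimage *image, void *blob, unsigned long size)
{
        struct kexec_buf kbuf = {
                .image     = image,
                .buffer    = blob,              /* now void *, no casts needed */
                .bufsz     = size,
                .memsz     = size,
                .buf_align = PAGE_SIZE,
                .buf_min   = 0,
                .buf_max   = ULONG_MAX,
                .top_down  = false,
        };
        int ret;

        ret = kexec_add_buffer(&kbuf);          /* single struct argument */
        if (ret)
                return ret;

        /* kbuf.mem now holds the load address chosen for the buffer. */
        return 0;
}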
2016-11-30  kexec_file: Allow arch-specific memory walking for kexec_add_buffer  (Thiago Jung Bauermann)
Allow architectures to specify a different memory walking function for kexec_add_buffer. x86 uses iomem to track reserved memory ranges, but PowerPC uses the memblock subsystem. Signed-off-by: Thiago Jung Bauermann <bauerman@linux.vnet.ibm.com> Acked-by: Dave Young <dyoung@redhat.com> Acked-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-30  powerpc/mm: Fix no execute fault handling on pre-POWER5  (Balbir Singh)
Aneesh/Ben reported that the change to do_page_fault() we made in commit 1d18ad026844 ("powerpc/mm: Detect instruction fetch denied and report") needs to handle the case where CPU_FTR_COHERENT_ICACHE is missing but we have CPU_FTR_NOEXECUTE. In those cases the check added for SRR1_ISI_N_OR_G might trigger a false positive. This patch adds a check for CPU_FTR_COHERENT_ICACHE in addition to the MSR value. Fixes: 1d18ad026844 ("powerpc/mm: Detect instruction fetch denied and report") Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
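A sketch of the shape of the check described above (the helper name is hypothetical and the real do_page_fault() code differs in detail):

/* Only treat SRR1_ISI_N_OR_G as an exec-denied fault when the CPU really has
 * a coherent icache; pre-POWER5 parts with CPU_FTR_NOEXECUTE but without
 * CPU_FTR_COHERENT_ICACHE would otherwise trip a false positive. */
static bool exec_denied_fault(unsigned long error_code)
{
        return (error_code & SRR1_ISI_N_OR_G) &&
               cpu_has_feature(CPU_FTR_COHERENT_ICACHE);
}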
2016-11-29  powerpc/boot: Fix rebuild when changing kernel endian  (Michael Ellerman)
Now that we don't set ARCH incorrectly when calling the boot Makefile, we can use the generic cpp_lds_S rule for converting our zImage.lds.S into zImage.lds. The main advantage of using the generic rule is that it correctly uses if_changed, which means we correctly regenerate the linker script when switching endian. Fixing that means we are finally able to build one endian and then rebuild the other endian without requiring to clean between builds. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-29  powerpc/boot: All uses of if_changed should depend on FORCE  (Michael Ellerman)
If we're using if_changed then we must depend on FORCE, so that if_changed gets a chance to check if something changed. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-29  powerpc: Stop passing ARCH=ppc64 to boot Makefile  (Michael Ellerman)
Back in 2005 when the ppc/ppc64 merge started, we used to build the kernel code in arch/powerpc but use the boot code from arch/ppc or arch/ppc64 depending on whether we were building for 32 or 64-bit.

Originally we called the boot Makefile passing ARCH=$(OLDARCH), where OLDARCH was ppc or ppc64.

In commit 20f629549b30 ("powerpc: Make building the boot image work for both 32-bit and 64-bit") (2005-10-11) we split the call for 32/64-bit using an ifeq check, because the two Makefiles took different targets, and explicitly passed ARCH=ppc64 for the 64-bit case and ARCH=ppc for the 32-bit case.

Then in commit 94b212c29f68 ("powerpc: Move ppc64 boot wrapper code over to arch/powerpc") (2005-11-16) we moved the boot code into arch/powerpc and dropped the ppc case, but kept passing ARCH=ppc64 to arch/powerpc/boot/Makefile.

Since then there have been several more boot targets added, all of which have copied the ARCH=ppc64 setting, such that now we have four targets using it.

Currently it seems that nothing actually uses the ARCH value, but that's basically just luck, and in particular it prevents us from using the generic cpp_lds_S rule. It's also clearly wrong, ARCH=ppc64 is dead, buried and cremated.

Fix it by dropping the setting of ARCH completely, the correct value is exported by the top level Makefile.

Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-28  powerpc/mm: Batch tlb flush when invalidating pte entries  (Aneesh Kumar K.V)
This will improve the task exit case, by batching tlb invalidates. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-28  powerpc/mm: update radix__pte_update to not do full mm tlb flush  (Aneesh Kumar K.V)
When we are updating a pte, we just need to flush the tlb mapping for that pte. Right now we do a full mm flush because we don't track the page size. Now that we have the page size details in the pte, use them to do an optimized flush. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-28  powerpc/mm: update radix__ptep_set_access_flag to not do full mm tlb flush  (Aneesh Kumar K.V)
When we are updating a pte, we just need to flush the tlb mapping for that pte. Right now we do a full mm flush because we don't track the page size. Now that we have the page size details in the pte, use them to do an optimized flush. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-28  powerpc/mm: Add radix__tlb_flush_pte_p9_dd1()  (Aneesh Kumar K.V)
Now that we have page size details encoded in pte using software pte bits, use that to find the page size needed for tlb flush. This function should only be used on P9 DD1, so give it a horrible name to make that clear. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-28  powerpc/mm: Introduce _PAGE_LARGE software pte bits  (Aneesh Kumar K.V)
This patch adds a new software-defined pte bit. We use the reserved fields of the ISA 3.0 pte definition, since we will only be using this on DD1 code paths; we can possibly look at removing this code later. The software bit will be used to differentiate between 64K/4K and 2M ptes. This helps us find the page size mapped by a pte, so that we can do an efficient tlb flush. We don't support 1G hugetlb pages yet, so we add a DEBUG WARN_ON to catch wrong usage. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
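A hypothetical sketch of how such a bit can be consumed; the chosen bit position and the helper below are assumptions for illustration only.

/* Assumption: place _PAGE_LARGE in one of the pte bits ISA 3.0 reserves,
 * since this is only used on P9 DD1 code paths. */
#define _PAGE_LARGE     _RPAGE_RSV1     /* illustrative choice of reserved bit */

/* Hypothetical helper: pick the page size to flush based on the pte. */
static inline int pte_flush_psize(pte_t pte)
{
        if (pte_val(pte) & _PAGE_LARGE)
                return MMU_PAGE_2M;             /* huge pte */
        return mmu_virtual_psize;               /* 64K or 4K base page size */
}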
2016-11-28  powerpc/mm/hugetlb: Handle hugepage size supported by hash config  (Aneesh Kumar K.V)
With the hash page table config, we support 16MB and 16GB as hugepage sizes. Update hstate_get_psize() to handle 16M and 16G. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
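A sketch of what the updated helper plausibly looks like, matching each supported size's shift against mmu_psize_defs[]; treat the exact body as an assumption.

static inline unsigned int hstate_get_psize(struct hstate *hstate)
{
        unsigned long shift = huge_page_shift(hstate);

        if (shift == mmu_psize_defs[MMU_PAGE_2M].shift)
                return MMU_PAGE_2M;
        else if (shift == mmu_psize_defs[MMU_PAGE_1G].shift)
                return MMU_PAGE_1G;
        else if (shift == mmu_psize_defs[MMU_PAGE_16M].shift)
                return MMU_PAGE_16M;
        else if (shift == mmu_psize_defs[MMU_PAGE_16G].shift)
                return MMU_PAGE_16G;

        WARN(1, "Unsupported hugepage shift\n");
        return mmu_virtual_psize;
}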
2016-11-28  powerpc/mm: Rename hugetlb-radix.h to hugetlb.h  (Aneesh Kumar K.V)
We will start moving some book3s specific hugetlb functions there. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-28  powerpc/64e: Don't branch to dot symbols  (Nicholas Piggin)
This converts one that was missed by b1576fec7f4d ("powerpc: No need to use dot symbols when branching to a function"). Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-28  powerpc/64e: Convert cmpi to cmpwi in head_64.S  (Nicholas Piggin)
From 80f23935cadb ("powerpc: Convert cmp to cmpd in idle enter sequence"):

PowerPC's "cmp" instruction has four operands. Normally people write "cmpw" or "cmpd" for the second cmp operand 0 or 1. But, frequently people forget, and write "cmp" with just three operands.

With older binutils this is silently accepted as if this was "cmpw", while often "cmpd" is wanted. With newer binutils GAS will complain about this for 64-bit code. For 32-bit code it still silently assumes "cmpw" is what is meant.

In this case, cmpwi is called for, so this is just a build fix for new toolchains.

Cc: stable@vger.kernel.org # v3.0+ Signed-off-by: Nicholas Piggin <npiggin@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-26  powerpc/mm/radix: Prevent kernel execution of user space  (Balbir Singh)
ISA 3 defines new encoded access authority that allows instruction access prevention in privileged mode and allows normal access to problem state. This patch just enables IAMR (Instruction Authority Mask Register), enabling AMR would require more work.

I've tested this with a buggy driver and a simple payload. The payload is specific to the build I've tested.

mpe: Also tested with LKDTM:

  # echo EXEC_USERSPACE > /sys/kernel/debug/provoke-crash/DIRECT
  lkdtm: Performing direct entry EXEC_USERSPACE
  lkdtm: attempting ok execution at c0000000005bf560
  lkdtm: attempting bad execution at 00003fff8d940000
  Unable to handle kernel paging request for instruction fetch
  Faulting instruction address: 0x3fff8d940000
  Oops: Kernel access of bad area, sig: 11 [#1]
  NIP: 00003fff8d940000 LR: c0000000005bfa58 CTR: 00003fff8d940000
  REGS: c0000000f1fcf900 TRAP: 0400 Not tainted (4.9.0-rc5-compiler_gcc-6.2.0-00109-g956dbc06232a)
  MSR: 9000000010009033 <SF,HV,EE,ME,IR,DR,RI,LE> CR: 48002222 XER: 00000000
  ...
  Call Trace:
    lkdtm_EXEC_USERSPACE+0x104/0x120 (unreliable)
    lkdtm_do_action+0x3c/0x80
    direct_entry+0x100/0x1b0
    full_proxy_write+0x94/0x100
    __vfs_write+0x3c/0x1b0
    vfs_write+0xcc/0x230
    SyS_write+0x60/0x110
    system_call+0x38/0xfc

Signed-off-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-25  powerpc/mm: Detect instruction fetch denied and report  (Balbir Singh)
ISA 3 allows for prevention of instruction fetch and execution of user mode pages. If such an error occurs, SRR1 bit 35 reports the error. We catch and report the error in do_page_fault(). Signed-off-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-25  powerpc/mm/radix: Setup AMOR in HV mode to allow key 0  (Balbir Singh)
Setup AMOR (Authority Mask Override Register) in HV mode so that the host and guest kernel can in turn setup IAMR. This allows us to enable key 0 in a following patch. Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Signed-off-by: Balbir Singh <bsingharora@gmail.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-25  powernv: Clear SPRN_PSSCR when a POWER9 CPU comes online  (Gautham R. Shenoy)
Ensure that PSSCR is set to a safe value corresponding to no state-loss each time a POWER9 CPU comes online. Signed-off-by: Gautham R. Shenoy <ego@linux.vnet.ibm.com> Acked-By: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-25  powerpc/xmon: Add 'dt' command to dump trace buffers  (Michael Ellerman)
There is a nice interface for asking ftrace to dump all its tracing buffers. The only downside for use in xmon is that it uses printk. Depending on circumstances, printk may or may not work when in xmon, so add a 'dt' command which dumps the ftrace buffers, and add a note to the help text to mention that it uses printk. Calling this routine also disables tracing, which is problematic if you return from xmon and expect the system to keep operating normally. So after we do the dump, turn tracing back on. Both functions already have nop versions defined for when ftrace is not enabled, so we don't need any extra #ifdefs. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
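A hedged sketch of what the command body amounts to; ftrace_dump() and tracing_on() are the real interfaces referred to above, while the wrapper function is illustrative.

#include <linux/ftrace.h>

/* Dump every ftrace buffer via printk, then re-enable tracing because
 * ftrace_dump() disables it as a side effect. */
static void xmon_dump_tracing(void)
{
        ftrace_dump(DUMP_ALL);
        tracing_on();
}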
2016-11-25  powerpc/of_platform: Use builtin_platform_driver  (Geliang Tang)
Use builtin_platform_driver() helper to simplify the code. Signed-off-by: Geliang Tang <geliangtang@gmail.com> Acked-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-25  cxl: drop duplicate header sched.h  (Geliang Tang)
Drop duplicate header sched.h from native.c. Signed-off-by: Geliang Tang <geliangtang@gmail.com> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-25  powerpc: Fix __cmpxchg() to take a volatile ptr again  (Michael Ellerman)
In commit d0563a1297e2 ("powerpc: Implement {cmp}xchg for u8 and u16") we removed the volatile from __cmpxchg(). This is leading to warnings such as:

  drivers/gpu/drm/drm_lock.c: In function ‘drm_lock_take’:
  arch/powerpc/include/asm/cmpxchg.h:484:37: warning: passing argument 1 of ‘__cmpxchg’ discards ‘volatile’ qualifier from pointer target
    (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_, \

There doesn't seem to be consensus across architectures whether the argument is volatile or not, so at least for now put the volatile back.

Fixes: d0563a1297e2 ("powerpc: Implement {cmp}xchg for u8 and u16") Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-24  Merge branch 'topic/ppc-kvm' into next  (Michael Ellerman)
Merge the topic branch we're sharing with the kvm-ppc tree.
2016-11-23  cxl: Fix coccinelle warnings  (Andrew Donnellan)
Fix the following coccinelle warnings:

  drivers/misc/cxl/debugfs.c:46:0-23: WARNING: fops_io_x64 should be defined with DEFINE_DEBUGFS_ATTRIBUTE
  drivers/misc/cxl/guest.c:890:5-26: WARNING: Comparison to bool
  drivers/misc/cxl/irq.c:107:3-23: WARNING: Assignment of bool to 0/1
  drivers/misc/cxl/native.c:57:2-3: Unneeded semicolon
  drivers/misc/cxl/native.c:170:2-3: Unneeded semicolon

Signed-off-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Acked-by: Frederic Barrat <fbarrat@linux.vnet.ibm.com> Reviewed-by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com> Acked-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-23  powerpc/32: Change the stack protector canary value per task  (Christophe Leroy)
Partially copied from commit df0698be14c66 ("ARM: stack protector: change the canary value per task") A new random value for the canary is stored in the task struct whenever a new task is forked. This is meant to allow for different canary values per task. On powerpc, GCC expects the canary value to be found in a global variable called __stack_chk_guard. So this variable has to be updated with the value stored in the task struct whenever a task switch occurs. Because the variable GCC expects is global, this cannot work on SMP unfortunately. So, on SMP, the same initial canary value is kept throughout, making this feature a bit less effective although it is still useful. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
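A minimal sketch of the mechanism, reusing the names from the ARM commit it was copied from; the real powerpc change updates the variable from the assembly task-switch path, and the helper below is hypothetical.

#include <linux/sched.h>

/* GCC's -fstack-protector reads the canary from this global on powerpc. */
extern unsigned long __stack_chk_guard;

/* On every task switch, publish the incoming task's canary where GCC expects
 * it. On SMP this global is shared, hence the limitation noted above. */
static inline void switch_stack_canary(struct task_struct *next)
{
        __stack_chk_guard = next->stack_canary;
}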
2016-11-23  powerpc: Initial stack protector (-fstack-protector) support  (Christophe Leroy)
Partially copied from commit c743f38013aef ("ARM: initial stack protector (-fstack-protector) support"). This is the very basic stuff, without changing the canary upon task switch yet. Just the Kconfig option and a constant canary value initialized at boot time. Signed-off-by: Christophe Leroy <christophe.leroy@c-s.fr> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-23  powerpc: Implement {cmp}xchg for u8 and u16  (Pan Xinhui)
Implement xchg{u8,u16}{local,relaxed}, and cmpxchg{u8,u16}{,local,acquire,relaxed}. It works on all ppc. Also remove the volatile qualifier from the first parameter of __cmpxchg_local and __cmpxchg. Suggested-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Pan Xinhui <xinhui.pan@linux.vnet.ibm.com> Acked-by: Boqun Feng <boqun.feng@gmail.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
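The general trick when the hardware reservation granule is a 32-bit word (lwarx/stwcx.) is to perform the byte or halfword update through an aligned word-wide compare-and-swap with shifting and masking. A standalone C illustration of that idea (not the kernel code, which uses inline assembly):

#include <stdint.h>

/* Emulate a byte-wide cmpxchg with a word-wide compare-and-swap plus
 * shift/mask. Returns the previous byte value. Uses a little-endian byte
 * lane calculation for simplicity. */
static uint8_t cmpxchg_u8_emulated(uint8_t *p, uint8_t old, uint8_t new_val)
{
        uint32_t *wp = (uint32_t *)((uintptr_t)p & ~(uintptr_t)3);
        unsigned int shift = ((uintptr_t)p & 3) * 8;
        uint32_t mask = 0xffu << shift;
        uint32_t expected, desired;

        for (;;) {
                expected = __atomic_load_n(wp, __ATOMIC_RELAXED);
                if ((uint8_t)((expected & mask) >> shift) != old)
                        return (expected & mask) >> shift;      /* compare failed */
                desired = (expected & ~mask) | ((uint32_t)new_val << shift);
                if (__atomic_compare_exchange_n(wp, &expected, desired, 0,
                                                __ATOMIC_SEQ_CST, __ATOMIC_RELAXED))
                        return old;
                /* another byte in the same word changed concurrently; retry */
        }
}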
2016-11-23  powerpc/pseries/ibmebus: Remove legacy suspend/resume support  (Lars-Peter Clausen)
There are no ibmebus drivers that make use of legacy suspend/resume. This patch removes the support for it from the ibmebus framework; new ibmebus drivers (as unlikely as they are) wanting to use suspend/resume should use dev_pm_ops. Since there aren't any special bus-specific things to do during suspend/resume, and since the PM core will automatically fall back to using the device's PM ops if no bus PM ops are specified, there is no need to have any special ibmebus PM ops at all. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-23  powerpc/kprobes: Invoke handlers directly  (Naveen N. Rao)
Invoke the kprobe handlers directly rather than through notify_die(), to reduce path taken for handling kprobes. Similar to commit 6f6343f53d13 ("kprobes/x86: Call exception handlers directly from do_int3/do_debug"). While at it, rename post_kprobe_handler() to kprobe_post_handler() for more uniform naming. Reported-by: Masami Hiramatsu <mhiramat@kernel.org> Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-23  powerpc: Remove extraneous header from asm-prototypes.h  (Naveen N. Rao)
Commit 03465f899bda ("powerpc: Use kprobe blacklist for exception handlers") removed __kprobes annotation from some of the prototypes, but left the kprobes header include directive unchanged. Remove it as it is no longer needed. Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-23  powerpc/powernv: Define and set POWER9 HFSCR doorbell bit  (Michael Neuling)
Define and set the POWER9 HFSCR doorbell bit so that guests can use msgsndp. ISA 3.0 calls this MSGP, so name it accordingly in the code. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-23  powerpc/reg: Add definition for LPCR_PECE_HVEE  (Michael Ellerman)
ISA 3.0 defines a new PECE (Power-saving mode Exit Cause Enable) field in the LPCR (Logical Partitioning Control Register), called LPCR_PECE_HVEE (Hypervisor Virtualization Exit Enable). KVM code will need to know about this bit, so add a definition for it. Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-23  powerpc/64: Define new ISA v3.00 logical PVR value and PCR register value  (Suraj Jitindar Singh)
ISA 3.00 adds the logical PVR value 0x0f000005, so add a definition for this. Define PCR_ARCH_207 to reflect ISA 2.07 compatibility mode in the processor compatibility register (PCR). [paulus@ozlabs.org - moved dummy PCR_ARCH_300 value into next patch] Signed-off-by: Suraj Jitindar Singh <sjitindarsingh@gmail.com> Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
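For reference, the quoted value as a definition (the macro name is an assumption; only the value comes from the text above):

/* Logical PVR advertising ISA v3.00 "architected" compatibility mode. */
#define PVR_ARCH_300    0x0f000005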
2016-11-23  powerpc/powernv: Define real-mode versions of OPAL XICS accessors  (Paul Mackerras)
This defines real-mode versions of opal_int_get_xirr(), opal_int_eoi() and opal_int_set_mfrr(), for use by KVM real-mode code. It also exports opal_int_set_mfrr() so that the modular part of KVM can use it to send IPIs. Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-23  powerpc/64: Provide functions for accessing POWER9 partition table  (Paul Mackerras)
POWER9 requires the host to set up a partition table, which is a table in memory indexed by logical partition ID (LPID) which contains the pointers to page tables and process tables for the host and each guest.

This factors out the initialization of the partition table into a single function. This code was previously duplicated between hash_utils_64.c and pgtable-radix.c.

This provides a function for setting a partition table entry, which is used in early MMU initialization, and will be used by KVM whenever a guest is created. This function includes a tlbie instruction which will flush all TLB entries for the LPID and all caches of the partition table entry for the LPID, across the system.

This also moves a call to memblock_set_current_limit(), which was in radix_init_partition_table(), but has nothing to do with the partition table. By analogy with the similar code for hash, the call gets moved to near the end of radix__early_init_mmu(). It now gets called when running as a guest, whereas previously it would only be called if the kernel is running as the host.

Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-22  powerpc/mm/coproc: Handle bad address on coproc slb fault  (Aneesh Kumar K.V)
VSID 0 is a bad address. Don't create SLB entries on a coproc fault for a bad address. Signed-off-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com> Reviewed-by: Balbir Singh <bsingharora@gmail.com> Reviewed-by: Ian Munsie <imunsie@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-22  powerpc/eeh: Refactor EEH PE reset functions  (Russell Currey)
eeh_pe_reset and eeh_reset_pe are two different functions in the same file which do mostly the same thing. Not only is this confusing, but it potentially causes discrepancies in functionality, notably in eeh_reset_pe as it does not check return values for failure.

Refactor this into the following:
- eeh_pe_reset(): stays as is, performs a single operation, exported
- eeh_pe_reset_full(): new, full reset process that calls eeh_pe_reset()
- eeh_reset_pe(): removed and replaced by eeh_pe_reset_full()
- eeh_reset_pe_once(): removed

Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-22  powerpc/pci: Always print PHB and PE numbers as hexadecimal  (Russell Currey)
PHB, PE (and by association MVE) numbers are printed as a mix of decimal and hexadecimal throughout the kernel. This can be misleading, so make them all hexadecimal.

Standardising on hex instead of dec because:
- PHB numbers are presented in hex in sysfs/debugfs (and lspci, etc)
- PE numbers are presented as hex in sysfs and parsed in hex in debugfs

The only place I think this could cause confusion is in the messages during boot, i.e.

  pci 000a:01 : [PE# 000] Secondary bus 1 associated with PE#0

which can be a quick way to check PE numbers. pe_level_printk() will only print two characters instead of three, so the above would be

  pci 000a:01 : [PE# 00] Secondary bus 1 associated with PE#0

which gives a hint it's in hex.

Signed-off-by: Russell Currey <ruscur@russell.cc> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-22  powerpc/powernv: Don't warn on PE init if unfreeze is unsupported  (Russell Currey)
Whenever a PE is initialised in powernv, opal_pci_eeh_freeze_clear() is called. This is to remove any existing freeze, and has no negative side effects if the PE is already in an unfrozen state. On PHB backends that don't support this operation and return OPAL_UNSUPPORTED, this creates a scary and misleading warning message. Skip the warning message on init if OPAL_UNSUPPORTED is returned. As far as I'm aware, this currently only affects NPUs. Fixes: 313483d ("powerpc/powernv: Unfreeze PE on allocation") Signed-off-by: Russell Currey <ruscur@russell.cc> Acked-by: Gavin Shan <gwshan@linux.vnet.ibm.com> Reviewed-by: Andrew Donnellan <andrew.donnellan@au1.ibm.com> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
2016-11-22  powerpc/64: Add some more SPRs and SPR bits for POWER9  (Paul Mackerras)
These definitions will be needed by KVM. Signed-off-by: Paul Mackerras <paulus@ozlabs.org> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>