Commit graph

67 commits

Author SHA1 Message Date
Ard Biesheuvel
9f80ccda53 ARM: 9180/1: Thumb2: align ALT_UP() sections in modules sufficiently
When building for Thumb2, the .alt.smp.init sections that are emitted by
the ALT_UP() patching code may not be 32-bit aligned, even though the
fixup_smp_on_up() routine expects that. This results in alignment faults
at module load time, which need to be fixed up by the fault handler.

So let's align those sections explicitly, and prevent this from occurring.
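As a rough sketch of the fix (directives simplified, not the literal patch), an
explicit .align inside the pushsection keeps every entry 32-bit aligned:

       .pushsection ".alt.smp.init", "a"
       .align  2                  @ Thumb-2 sections are only 16-bit aligned by default
       .long   9998b - .          @ 32-bit entry that fixup_smp_on_up() reads as a word
       .popsection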

Cc: <stable@vger.kernel.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
2022-01-19 11:10:54 +00:00
Ard Biesheuvel
18ed1c01a7 ARM: smp: Enable THREAD_INFO_IN_TASK
Now that we no longer rely on thread_info living at the base of the task
stack to be able to access the 'current' pointer, we can wire up the
generic support for moving thread_info into the task struct itself.

Note that this requires us to update the cpu field in thread_info
explicitly, now that the core code no longer does so. Ideally, we would
switch the percpu code to access the cpu field in task_struct instead,
but this unleashes #include circular dependency hell.

Co-developed-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
2021-09-27 16:54:02 +02:00
Ard Biesheuvel
50596b7559 ARM: smp: Store current pointer in TPIDRURO register if available
Now that the user space TLS register is assigned on every return to user
space, we can use it to keep the 'current' pointer while running in the
kernel. This removes the need to access it via thread_info, which is
located at the base of the stack, but will be moved out of there in a
subsequent patch.

Use the __builtin_thread_pointer() helper when available - this will
help GCC understand that reloading the value within the same function is
not necessary, even when using the per-task stack protector (which also
generates accesses via the TLS register). For example, the generated
code below loads TPIDRURO only once, and uses it to access both the
stack canary and the preempt_count fields.

<do_one_initcall>:
       e92d 41f0       stmdb   sp!, {r4, r5, r6, r7, r8, lr}
       ee1d 4f70       mrc     15, 0, r4, cr13, cr0, {3}
       4606            mov     r6, r0
       b094            sub     sp, #80 ; 0x50
       f8d4 34e8       ldr.w   r3, [r4, #1256] ; 0x4e8  <- stack canary
       9313            str     r3, [sp, #76]   ; 0x4c
       f8d4 8004       ldr.w   r8, [r4, #4]             <- preempt count

Co-developed-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Keith Packard <keithpac@amazon.com>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Tested-by: Amit Daniel Kachhap <amit.kachhap@arm.com>
2021-09-27 16:54:02 +02:00
Ard Biesheuvel
6468e898c6 ARM: 9039/1: assembler: generalize byte swapping macro into rev_l
Take the 4 instruction byte swapping sequence from the decompressor's
head.S, and turn it into a rev_l GAS macro for general use. While
at it, make it use the 'rev' instruction when compiling for v6 or
later.
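Illustrative sketch of the macro (operand names and exact conditionals are
assumptions, not the verbatim header):

       .macro  rev_l, val:req, tmp:req
       .if     __LINUX_ARM_ARCH__ < 6
       eor     \tmp, \val, \val, ror #16  @ tmp = val ^ ror(val, 16)
       bic     \tmp, \tmp, #0x00ff0000    @ clear the byte that must not be touched
       mov     \val, \val, ror #8         @ rotate the word into rough position
       eor     \val, \val, \tmp, lsr #8   @ fix up the two middle bytes
       .else
       rev     \val, \val                 @ v6+: single-instruction byte reverse
       .endif
       .endm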

Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>
Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2021-02-01 19:41:30 +00:00
Ard Biesheuvel
450abd38fe ARM: kernel: use relative references for UP/SMP alternatives
Currently, the .alt.smp.init section contains the virtual addresses
of the patch sites. Since patching may occur both before and after
switching into virtual mode, this requires some manual handling of
the address when applying the UP alternative.

Let's simplify this by using relative offsets in the table entries:
this allows us to simply add each entry's address to its contents,
regardless of whether we are running in virtual mode or not.
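Sketched (entry layout simplified), the change to the table entry looks like
this:

       @ before: .long 9998b      (absolute virtual address of the patch site)
       @ after:
       .long   9998b - .          @ offset from this entry to the patch site
       @ at fixup time: patch address = address of this word + its stored value,
       @ valid both before and after the MMU is switched on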

Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-10-28 17:05:39 +01:00
Ard Biesheuvel
0b1674638a ARM: assembler: introduce adr_l, ldr_l and str_l macros
Like arm64, ARM supports position independent code sequences that
produce symbol references with a greater reach than the ordinary
adr/ldr instructions. Since on ARM, the adrl pseudo-instruction is
only supported in ARM mode (and not at all when using Clang), having
an adr_l macro like we do on arm64 is useful, and increases symmetry
as well.

Currently, we use open coded instruction sequences involving literals
and arithmetic operations. Instead, we can use movw/movt pairs on v7
CPUs, circumventing the D-cache entirely.

E.g., on v7+ CPUs, we can emit a PC-relative reference as follows:

       movw         <reg>, #:lower16:<sym> - (1f + 8)
       movt         <reg>, #:upper16:<sym> - (1f + 8)
  1:   add          <reg>, <reg>, pc

For older CPUs, we can emit the literal into a subsection, allowing it
to be emitted out of line while retaining the ability to perform
arithmetic on label offsets.

E.g., on pre-v7 CPUs, we can emit a PC-relative reference as follows:

       ldr          <reg>, 2f
  1:   add          <reg>, <reg>, pc
       .subsection  1
  2:   .long        <sym> - (1b + 8)
       .previous

This is allowed by the assembler because, unlike ordinary sections,
subsections are combined into a single section in the object file, and
so the label references are not true cross-section references that are
visible as relocations. (Subsections have been available in binutils
since 2004 at least, so they should not cause any issues with older
toolchains.)

So use the above to implement the macros mov_l, adr_l, ldr_l and str_l,
all of which will use movw/movt pairs on v7 and later CPUs, and use
PC-relative literals otherwise.

Reviewed-by: Nicolas Pitre <nico@fluxnic.net>
Reviewed-by: Linus Walleij <linus.walleij@linaro.org>
Signed-off-by: Ard Biesheuvel <ardb@kernel.org>
2020-10-28 16:59:43 +01:00
Linus Torvalds
c2b0fc847f ARM updates for 5.8-rc1:
- remove a now unnecessary usage of the KERNEL_DS for
   sys_oabi_epoll_ctl()
 - update my email address in a number of drivers
 - decompressor EFI updates from Ard Biesheuvel
 - module unwind section handling updates
 - sparsemem Kconfig cleanups
 - make act_mm macro respect THREAD_SIZE

Merge tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm

Pull ARM updates from Russell King:

 - remove a now unnecessary usage of the KERNEL_DS for
   sys_oabi_epoll_ctl()

 - update my email address in a number of drivers

 - decompressor EFI updates from Ard Biesheuvel

 - module unwind section handling updates

 - sparsemem Kconfig cleanups

 - make act_mm macro respect THREAD_SIZE

* tag 'for-linus' of git://git.armlinux.org.uk/~rmk/linux-arm:
  ARM: 8980/1: Allow either FLATMEM or SPARSEMEM on the multiplatform build
  ARM: 8979/1: Remove redundant ARCH_SPARSEMEM_DEFAULT setting
  ARM: 8978/1: mm: make act_mm() respect THREAD_SIZE
  ARM: decompressor: run decompressor in place if loaded via UEFI
  ARM: decompressor: move GOT into .data for EFI enabled builds
  ARM: decompressor: defer loading of the contents of the LC0 structure
  ARM: decompressor: split off _edata and stack base into separate object
  ARM: decompressor: move headroom variable out of LC0
  ARM: 8976/1: module: allow arch overrides for .init section names
  ARM: 8975/1: module: fix handling of unwind init sections
  ARM: 8974/1: use SPARSMEM_STATIC when SPARSEMEM is enabled
  ARM: 8971/1: replace the sole use of a symbol with its definition
  ARM: 8969/1: decompressor: simplify libfdt builds
  Update rmk's email address in various drivers
  ARM: compat: remove KERNEL_DS usage in sys_oabi_epoll_ctl()
2020-06-01 15:36:32 -07:00
Russell King
747ffc2fcf ARM: uaccess: consolidate uaccess asm to asm/uaccess-asm.h
Consolidate the user access assembly code to asm/uaccess-asm.h.  This
moves the csdb, check_uaccess, uaccess_mask_range_ptr, uaccess_enable,
uaccess_disable, uaccess_save, uaccess_restore macros, and creates two
new ones for exception entry and exit - uaccess_entry and uaccess_exit.

This makes the uaccess_save and uaccess_restore macros private to
asm/uaccess-asm.h.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-05-03 17:30:24 +01:00
Jian Cai
a780e485b5 ARM: 8971/1: replace the sole use of a symbol with its definition
The ALT_UP_B macro sets the symbol up_b_offset via .equ to an expression
involving another symbol. The macro gets expanded twice when
arch/arm/kernel/sleep.S is assembled, creating a scenario where
up_b_offset is set to another expression involving symbols while its
current value is already based on symbols. LLVM's integrated assembler does not
allow such cases, and based on the documentation of binutils, "Values
that are based on expressions involving other symbols are allowed, but
some targets may restrict this to only being done once per assembly", so
it may be better to avoid such cases as it is not clearly stated which
targets should support or disallow them. The fix in this case is simple,
as up_b_offset has only one use, so we can replace the use with the
definition and get rid of up_b_offset.
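In sketch form (simplified from the real header), the change amounts to
dropping the intermediate symbol:

       @ before: a new .equ definition on every expansion of the macro
       .equ    up_b_offset, label - 9998b
       ALT_UP(W(b) . + up_b_offset)

       @ after: use the expression directly, no symbol redefinition
       ALT_UP(W(b) . + (label - 9998b))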

Link: https://github.com/ClangBuiltLinux/linux/issues/920
Reviewed-by: Stefan Agner <stefan@agner.ch>
Reviewed-by: Nick Desaulniers <ndesaulniers@google.com>
Signed-off-by: Jian Cai <caij2003@gmail.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2020-04-29 13:30:20 +01:00
Thomas Gleixner
d2912cb15b treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500
Based on 2 normalized pattern(s):

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation

  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation #

extracted by the scancode license scanner the SPDX license identifier

  GPL-2.0-only

has been chosen to replace the boilerplate/reference in 4122 file(s).

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-19 17:09:55 +02:00
Stefan Agner
c001899a5d ARM: 8843/1: use unified assembler in headers
Use unified assembler syntax (UAL) in headers. Divided syntax is
considered deprecated. This will also allow building the kernel
using LLVM's integrated assembler.
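For example (illustrative, not lines from the patch), divided syntax puts the
condition before the size suffix, while UAL puts it after:

       ldrneb  r0, [r1]           @ divided syntax: rejected by LLVM's integrated assembler
       ldrbne  r0, [r1]           @ unified (UAL) spelling of the same conditional byte load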

Signed-off-by: Stefan Agner <stefan@agner.ch>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2019-02-26 11:26:06 +00:00
Vincent Whitchurch
f441882a52 ARM: 8812/1: Optimise copy_{from/to}_user for !CPU_USE_DOMAINS
ARMv6+ processors do not use CONFIG_CPU_USE_DOMAINS and use privileged
ldr/str instructions in copy_{from/to}_user.  They are currently
unnecessarily using single ldr/str instructions and can use ldm/stm
instructions instead like memcpy does (but with appropriate fixup
tables).

This speeds up a "dd if=foo of=bar bs=32k" on a tmpfs filesystem by
about 4% on my Cortex-A9.

before:134217728 bytes (128.0MB) copied, 0.543848 seconds, 235.4MB/s
before:134217728 bytes (128.0MB) copied, 0.538610 seconds, 237.6MB/s
before:134217728 bytes (128.0MB) copied, 0.544356 seconds, 235.1MB/s
before:134217728 bytes (128.0MB) copied, 0.544364 seconds, 235.1MB/s
before:134217728 bytes (128.0MB) copied, 0.537130 seconds, 238.3MB/s
before:134217728 bytes (128.0MB) copied, 0.533443 seconds, 240.0MB/s
before:134217728 bytes (128.0MB) copied, 0.545691 seconds, 234.6MB/s
before:134217728 bytes (128.0MB) copied, 0.534695 seconds, 239.4MB/s
before:134217728 bytes (128.0MB) copied, 0.540561 seconds, 236.8MB/s
before:134217728 bytes (128.0MB) copied, 0.541025 seconds, 236.6MB/s

 after:134217728 bytes (128.0MB) copied, 0.520445 seconds, 245.9MB/s
 after:134217728 bytes (128.0MB) copied, 0.527846 seconds, 242.5MB/s
 after:134217728 bytes (128.0MB) copied, 0.519510 seconds, 246.4MB/s
 after:134217728 bytes (128.0MB) copied, 0.527231 seconds, 242.8MB/s
 after:134217728 bytes (128.0MB) copied, 0.525030 seconds, 243.8MB/s
 after:134217728 bytes (128.0MB) copied, 0.524236 seconds, 244.2MB/s
 after:134217728 bytes (128.0MB) copied, 0.523659 seconds, 244.4MB/s
 after:134217728 bytes (128.0MB) copied, 0.525018 seconds, 243.8MB/s
 after:134217728 bytes (128.0MB) copied, 0.519249 seconds, 246.5MB/s
 after:134217728 bytes (128.0MB) copied, 0.518527 seconds, 246.9MB/s
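Sketch of the inner loop described above (register roles are assumptions;
uaccess enable/disable and the exception fixup entries for the user-facing
accesses are omitted):

       @ move 32 bytes per iteration instead of one word at a time
9999:  ldmia   r1!, {r3-r10}      @ read eight words from the source
       stmia   r0!, {r3-r10}      @ write them to the destination
       subs    r2, r2, #32        @ r2 = remaining byte count
       bhs     9999b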

Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Vincent Whitchurch <vincent.whitchurch@axis.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-11-12 10:51:59 +00:00
Russell King
3e98d24098 Merge branches 'fixes', 'misc' and 'spectre' into for-next 2018-10-10 13:53:33 +01:00
Julien Thierry
afaf6838f4 ARM: 8796/1: spectre-v1,v1.1: provide helpers for address sanitization
Introduce C and asm helpers to sanitize user address, taking the
address range they target into account.

Use asm helper for existing sanitization in __copy_from_user().

Signed-off-by: Julien Thierry <julien.thierry@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-10-05 10:51:15 +01:00
Russell King
c61b466d4f Merge branches 'fixes', 'misc' and 'spectre' into for-linus
Conflicts:
	arch/arm/include/asm/uaccess.h

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-08-13 16:28:50 +01:00
Russell King
a3c0f84765 ARM: spectre-v1: mitigate user accesses
Spectre variant 1 attacks are about this sequence of pseudo-code:

	index = load(user-manipulated pointer);
	access(base + index * stride);

In order for the cache side-channel to work, the access() must be made
to memory for which userspace can detect whether cache lines have been
loaded.  On 32-bit ARM, this must be either user accessible memory, or
a kernel mapping of that same user accessible memory.

The problem occurs when the load() speculatively loads privileged data,
and the subsequent access() is made to user accessible memory.

Any load() which makes use of a user-manipulated pointer is a potential
problem if the data it has loaded is used in a subsequent access.  This
also applies for the access() if the data loaded by that access is used
by a subsequent access.

Harden the get_user() accessors against Spectre attacks by forcing out
of bounds addresses to a NULL pointer.  This prevents get_user() being
used as the load() step above.  As a side effect, put_user() will also
be affected even though it isn't implicated.

Also harden copy_from_user() by redoing the bounds check within the
arm_copy_from_user() code, and NULLing the pointer if out of bounds.
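Reduced to a hedged sketch (register roles are assumptions), the clamp-to-NULL
looks like:

       cmp     r0, r3             @ r0 = user pointer, r3 = address limit (assumed roles)
       movcs   r0, #0             @ out of range: force the pointer to NULL
       csdb                       @ so speculation cannot keep using the pre-clamp value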

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-08-02 17:41:38 +01:00
Russell King
0ac000e867 Merge branches 'fixes', 'misc' and 'spectre' into for-linus 2018-06-05 10:03:27 +01:00
Russell King
a78d156587 ARM: spectre-v1: add speculation barrier (csdb) macros
Add assembly and C macros for the new CSDB instruction.
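A sketch of the assembly side (raw encodings are used because older assemblers
do not know the mnemonic; treat the exact values as illustrative):

       .macro  csdb
#ifdef CONFIG_THUMB2_KERNEL
       .inst.w 0xf3af8014         @ CSDB, Thumb-2 encoding
#else
       .inst   0xe320f014         @ CSDB, ARM encoding
#endif
       .endm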

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Boot-tested-by: Tony Lindgren <tony@atomide.com>
Reviewed-by: Tony Lindgren <tony@atomide.com>
2018-05-31 23:27:16 +01:00
Masami Hiramatsu
0d73c3f8e7 ARM: 8772/1: kprobes: Prohibit kprobes on get_user functions
Since do_undefinstr() uses get_user to get the undefined
instruction, it can be called before the kprobes recursion
check has run. This can cause an infinite recursive
exception.
Prohibit probing on get_user functions.

Fixes: 24ba613c9d ("ARM kprobes: core code")
Signed-off-by: Masami Hiramatsu <mhiramat@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2018-05-19 11:35:56 +01:00
Russell King
8bafae202c ARM: BUG if jumping to usermode address in kernel mode
Detect if we are returning to usermode via the normal kernel exit paths
but the saved PSR value indicates that we are in kernel mode.  This
could occur due to corrupted stack state, which has been observed with
"ftracetest".

This ensures that we catch the problem case before we get to user code.

Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2017-11-26 15:41:39 +00:00
Arnd Bergmann
ffa47aa678 ARM: Prepare for randomized task_struct
With the new task struct randomization, we can run into a build
failure for certain random seeds, which will place fields beyond
the allowed immediate size in the assembly:

arch/arm/kernel/entry-armv.S: Assembler messages:
arch/arm/kernel/entry-armv.S:803: Error: bad immediate value for offset (4096)

Only two constants in asm-offsets.h are affected, and I'm changing
both of them here to work correctly in all configurations.

One more macro has the problem, but is currently unused, so this
removes it instead of adding complexity.
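To make the limitation concrete (generic example, not the actual patch): a
single ldr/str can only encode a 12-bit offset (0-4095), so a larger field
offset has to be folded into the base register first:

       ldr     r3, [r2, #4096]    @ rejected: "bad immediate value for offset (4096)"
       add     r2, r2, #4096      @ workaround: move the oversized part into the base
       ldr     r3, [r2]           @ then load with a small (here zero) offset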

Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
[kees: Adjust commit log slightly]
Signed-off-by: Kees Cook <keescook@chromium.org>
2017-06-30 12:00:50 -07:00
Vladimir Murzin
b2bf482a50 ARM: 8605/1: V7M: fix notrace variant of save_and_disable_irqs
Commit 8e43a905 "ARM: 7325/1: fix v7 boot with lockdep enabled"
introduced a notrace variant of save_and_disable_irqs to balance the notrace
variant of restore_irqs; however, the V7M case was missed. It was not
noticed because cache-v7.S is the only place where the notrace variant is used.
So fix it, since we are going to extend V7 cache routines to handle V7M
case too.
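The shape of the fix, sketched (the in-tree macro body may differ slightly):

       .macro  save_and_disable_irqs_notrace, oldcpsr
#ifdef CONFIG_CPU_V7M
       mrs     \oldcpsr, primask  @ V7-M keeps the interrupt mask in PRIMASK, not CPSR
#else
       mrs     \oldcpsr, cpsr
#endif
       disable_irq_notrace
       .endm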

Signed-off-by: Vladimir Murzin <vladimir.murzin@arm.com>
Tested-by: Andras Szemzo <sza@esh.hu>
Tested-by: Joachim Eastwood <manabian@gmail.com>
Tested-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2016-09-06 15:51:07 +01:00
Russell King
e6a9dc6129 ARM: introduce svc_pt_regs structure
Since the privileged mode pt_regs are an extended version of the saved
userland pt_regs, introduce a new svc_pt_regs structure to describe this
layout.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-06-22 19:54:52 +01:00
Russell King
5745eef6b8 ARM: rename S_FRAME_SIZE to PT_REGS_SIZE
S_FRAME_SIZE is no longer the size of the kernel stack frame, so this
name is misleading.  It is the size of the kernel pt_regs structure.
Name it so.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@armlinux.org.uk>
2016-06-22 19:54:28 +01:00
Linus Torvalds
57e6bbcb4b Merge branch 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm
Pull ARM fixes from Russell King:
 "A number of fixes for the merge window, fixing a number of cases
  missed when testing the uaccess code, particularly cases which only
  show up with certain compiler versions"

* 'fixes' of git://ftp.arm.linux.org.uk/~rmk/linux-arm:
  ARM: 8431/1: fix alignement of __bug_table section entries
  arm/xen: Enable user access to the kernel before issuing a privcmd call
  ARM: domains: add memory dependencies to get_domain/set_domain
  ARM: domains: thread_info.h no longer needs asm/domains.h
  ARM: uaccess: fix undefined instruction on ARMv7M/noMMU
  ARM: uaccess: remove unneeded uaccess_save_and_disable macro
  ARM: swpan: fix nwfpe for uaccess changes
  ARM: 8429/1: disable GCC SRA optimization
2015-09-14 12:24:10 -07:00
Russell King
296254f322 ARM: uaccess: remove unneeded uaccess_save_and_disable macro
This macro is never referenced, remove it.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-09-09 23:26:40 +01:00
Russell King
40d3f02851 Merge branches 'cleanup', 'fixes', 'misc', 'omap-barrier' and 'uaccess' into for-linus 2015-09-03 15:28:37 +01:00
Russell King
a5e090acbf ARM: software-based priviledged-no-access support
Provide a software-based implementation of the privileged no-access
support found in ARMv8.1.

Userspace pages are mapped using a different domain number from the
kernel and IO mappings.  If we switch the user domain to "no access"
when we enter the kernel, we can prevent the kernel from touching
userspace.

However, the kernel needs to be able to access userspace via the
various user accessor functions.  With the wrapping in the previous
patch, we can temporarily enable access when the kernel needs user
access, and re-disable it afterwards.

This allows us to trap non-intended accesses to userspace, eg, caused
by an inadvertent dereference of the LIST_POISON* values, which, with
appropriate user mappings setup, can be made to succeed.  This in turn
can allow use-after-free bugs to be further exploited than would
otherwise be possible.
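The central operation, as a hedged sketch (the domain number and restore value
are assumptions for illustration):

       mrc     p15, 0, r0, c3, c0, 0      @ read DACR
       bic     r0, r0, #(3 << 2)          @ assume the user domain is domain 1: clear its field
       mcr     p15, 0, r0, c3, c0, 0      @ 0b00 = "no access"; writing back 0b01
                                          @ ("client") later re-opens userspace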

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-08-26 20:34:24 +01:00
Russell King
2190fed67b ARM: entry: provide uaccess assembly macro hooks
Provide hooks into the kernel entry and exit paths to permit control
of userspace visibility to the kernel.  The intended use is:

- on entry to kernel from user, uaccess_disable will be called to
  disable userspace visibility
- on exit from kernel to user, uaccess_enable will be called to
  enable userspace visibility
- on entry from a kernel exception, uaccess_save_and_disable will be
  called to save the current userspace visibility setting, and disable
  access
- on exit from a kernel exception, uaccess_restore will be called to
  restore the userspace visibility as it was before the exception
  occurred.

These hooks allow us to keep userspace visibility disabled for the
vast majority of the kernel, except for localised regions where we
want to explicitly access userspace.
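Intended usage of the pairing, in sketch form (the scratch-register argument
is an assumption):

       uaccess_enable  r3         @ open the user-access window
       ldr     r2, [r0]           @ the one explicit access to a userspace pointer
       uaccess_disable r3         @ close the window again immediately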

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-08-26 20:27:02 +01:00
Russell King
3302caddf1 ARM: entry: efficiency cleanups
Make the "fast" syscall return path fast again.  The addition of IRQ
tracing and context tracking has made this path grossly inefficient.
Even with these options enabled, we can do much better if we save the
syscall return code on the stack - we then don't need to save a bunch
of registers around every single callout to C code.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-08-25 10:32:48 +01:00
Russell King
01e09a2816 ARM: entry: get rid of asm_trace_hardirqs_on_cond
There's no need for this macro, it can use a default for the
condition argument.

Acked-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-08-25 10:32:46 +01:00
Russell King
14327c6628 ARM: replace BSYM() with badr assembly macro
BSYM() was invented to allow us to work around a problem with the
assembler, where local symbols resolved by the assembler for the 'adr'
instruction did not take account of their ISA.

Since we don't want BSYM() used elsewhere, replace BSYM() with a new
macro 'badr', which is like the 'adr' pseudo-op, but with the BSYM()
mechanics integrated into it.  This ensures that the BSYM()-ification
is only used in conjunction with 'adr'.
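Sketched for the unconditional case (the real header also generates one
variant per condition code):

       .macro  badr, rd, sym
#ifdef CONFIG_THUMB2_KERNEL
       adr     \rd, \sym + 1      @ set bit 0 so a later bx/blx stays in Thumb state
#else
       adr     \rd, \sym
#endif
       .endm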

Acked-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-05-08 17:33:50 +01:00
Russell King
89c6bc5884 ARM: allow 16-bit instructions in ALT_UP()
Allow ALT_UP() to cope with a 16-bit Thumb instruction by automatically
inserting a following nop instruction.  This allows us to care less
about getting the assembler to emit a 32-bit thumb instruction.
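Simplified sketch of the idea: measure what the assembler actually emitted and
pad when it chose a 16-bit encoding:

       @ ALT_UP(instr), sketched: the UP alternative lives in .alt.smp.init
       .pushsection ".alt.smp.init", "a"
       .long   9998b              @ address of the SMP instruction emitted by ALT_SMP
9997:  instr                      @ the UP replacement instruction
       .if . - 9997b == 2         @ the assembler chose a 16-bit Thumb encoding
       nop                        @ pad it to the 4 bytes the patching code expects
       .endif
       .popsection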

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2015-04-14 22:26:51 +01:00
Russell King
6ebbf2ce43 ARM: convert all "mov.* pc, reg" to "bx reg" for ARMv6+
ARMv6 and greater introduced a new instruction ("bx") which can be used
to return from function calls.  Recent CPUs perform better when the
"bx lr" instruction is used rather than the "mov pc, lr" instruction,
and this sequence is strongly recommended to be used by the ARM
architecture manual (section A.4.1.1).

We provide a new macro "ret" with all its variants for the condition
code which will resolve to the appropriate instruction.

Rather than doing this piecemeal, and miss some instances, change all
the "mov pc" instances to use the new macro, with the exception of
the "movs" instruction and the kprobes code.  This allows us to detect
the "mov pc, lr" case and fix it up - and also gives us the possibility
of deploying this for other registers depending on the CPU selection.
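The unconditional variant of the macro can be sketched as follows
(condition-code generation omitted):

       .macro  ret, reg
#if __LINUX_ARM_ARCH__ < 6
       mov     pc, \reg           @ pre-v6 has no bx
#else
       .ifeqs  "\reg", "lr"
       bx      \reg               @ v6+: the recommended return sequence
       .else
       mov     pc, \reg           @ other registers keep the original form
       .endif
#endif
       .endm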

Reported-by: Will Deacon <will.deacon@arm.com>
Tested-by: Stephen Warren <swarren@nvidia.com> # Tegra Jetson TK1
Tested-by: Robert Jarzmik <robert.jarzmik@free.fr> # mioa701_bootresume.S
Tested-by: Andrew Lunn <andrew@lunn.ch> # Kirkwood
Tested-by: Shawn Guo <shawn.guo@freescale.com>
Tested-by: Tony Lindgren <tony@atomide.com> # OMAPs
Tested-by: Gregory CLEMENT <gregory.clement@free-electrons.com> # Armada XP, 375, 385
Acked-by: Sekhar Nori <nsekhar@ti.com> # DaVinci
Acked-by: Christoffer Dall <christoffer.dall@linaro.org> # kvm/hyp
Acked-by: Haojian Zhuang <haojian.zhuang@gmail.com> # PXA3xx
Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> # Xen
Tested-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de> # ARMv7M
Tested-by: Simon Horman <horms+renesas@verge.net.au> # Shmobile
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-18 12:29:04 +01:00
Andrey Ryabinin
9a2b51b6ca ARM: 8078/1: get rid of hardcoded assumptions about kernel stack size
Changing kernel stack size on arm is not as simple as it should be:
1) THREAD_SIZE macro doesn't respect PAGE_SIZE and THREAD_SIZE_ORDER
2) stack size is hardcoded in get_thread_info macro

This patch fixes it by calculating THREAD_SIZE and thread_info address
taking into account PAGE_SIZE and THREAD_SIZE_ORDER.

Now changing stack size becomes simply changing THREAD_SIZE_ORDER.
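The resulting mask can be sketched like this (assuming thread_info still lives
at the bottom of the stack, as it did at the time):

       .macro  get_thread_info, rd
       mov     \rd, sp, lsr #(THREAD_SIZE_ORDER + PAGE_SHIFT)   @ drop the low bits
       mov     \rd, \rd, lsl #(THREAD_SIZE_ORDER + PAGE_SHIFT)  @ round sp down to the stack base
       .endm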

Signed-off-by: Andrey Ryabinin <a.ryabinin@samsung.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-07-01 15:05:47 +01:00
Lorenzo Pieralisi
0e0779da22 ARM: 8053/1: kernel: sleep: restore HYP mode configuration in cpu_resume
On CPUs with virtualization extensions the kernel installs HYP mode
configuration on both primary and secondary cpus upon cold boot.

On platforms where CPUs are shutdown in idle paths (ie CPU core gating),
when a CPU resumes from low-power states it currently does not execute
code that reinstalls the HYP configuration, which means that the kernel
cannot run eg KVM properly on such machines.

This patch, mirroring cold-boot behaviour, executes position independent
code that reinstalls HYP configuration and drops to SVC mode safely on
warm boot, so that deep idle states can be enabled in kernels running as
hosts on platforms with power management HW.

Cc: Christoffer Dall <christoffer.dall@linaro.org>
Cc: Dave Martin <dave.martin@arm.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Acked-by: Marc Zyngier <marc.zyngier@arm.com>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-05-25 23:49:27 +01:00
Catalin Marinas
0b1f68e836 ARM: 8018/1: Add {inc,dec}_preempt_count asm macros
The patch adds asm macros for inc_preempt_count and dec_preempt_count_ti
(which also gets the current thread_info) instead of open-coding them in
arch/arm/vfp/*.S files.
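The shape of such a macro, sketched (TI_PREEMPT stands for the asm-offsets
constant of thread_info.preempt_count; the exact spelling is an assumption):

       .macro  inc_preempt_count, ti, tmp
       ldr     \tmp, [\ti, #TI_PREEMPT]   @ load preempt_count
       add     \tmp, \tmp, #1             @ increment it
       str     \tmp, [\ti, #TI_PREEMPT]   @ and write it back
       .endm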

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Arun KS <getarunks@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-09 13:08:08 +01:00
Catalin Marinas
39ad04ccd6 ARM: 8017/1: Move asm macro get_thread_info to asm/assembler.h
asm/assembler.h is a better place for this macro since it is used by
asm files outside arch/arm/kernel/

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Tested-by: Arun KS <getarunks@gmail.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-04-09 13:08:07 +01:00
Victor Kamensky
d98b90ea22 ARM: 7990/1: asm: rename logical shift macros push pull into lspush lspull
Rename the logical shift macros 'push' and 'pull', defined in
arch/arm/include/asm/assembler.h, to 'lspush' and 'lspull'.
That eliminates the name conflict between the 'push' logical shift macro
and the 'push' instruction mnemonic, and allows assembler.h to be
included in .S files that use the 'push' instruction.
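After the rename, the definitions amount to something like this (sketch; the
endian-dependent pairing is the point):

       @ little-endian: "pull" moves bytes towards bit 0, "push" towards bit 31
#ifndef __ARMEB__
#define lspull          lsr
#define lspush          lsl
#else
#define lspull          lsl
#define lspush          lsr
#endif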

Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Victor Kamensky <victor.kamensky@linaro.org>
Acked-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2014-02-25 11:33:57 +00:00
Ben Dooks
457c2403c5 ARM: asm: Add ARM_BE8() assembly helper
Add an ARM_BE8() helper to wrap any code that should only be
compiled when CONFIG_ARM_ENDIAN_BE8 is selected, and convert
existing places where this is done to use it.
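The helper boils down to something like the following sketch (the exact
Kconfig symbol spelling is an assumption):

#ifdef CONFIG_CPU_ENDIAN_BE8
#define ARM_BE8(code...)        code
#else
#define ARM_BE8(code...)
#endif
       ARM_BE8(rev     r0, r0)    @ example use: only emitted on BE8 big-endian builds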

Acked-by: Nicolas Pitre <nico@linaro.org>
Reviewed-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk>
2013-10-19 20:46:33 +01:00
Will Deacon
3ea128065e ARM: barrier: allow options to be passed to memory barrier instructions
On ARMv7, the memory barrier instructions take an optional `option'
field which can be used to constrain the effects of a memory barrier
based on shareability and access type.

This patch allows the caller to pass these options if required, and
updates the smp_*() barriers to request inner-shareable barriers,
affecting only stores for the _wmb variant. wmb() is also changed to
use the -st version of dsb.
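Concretely (illustrative encodings of the option field, not lines from the
patch):

       dmb     ish                @ inner-shareable, loads and stores (used by smp_mb)
       dmb     ishst              @ inner-shareable, stores only (used by smp_wmb)
       dsb     st                 @ full-system, stores only (wmb after this change)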

Reported-by: Albin Tonnerre <albin.tonnerre@arm.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-08-12 12:25:44 +01:00
Catalin Marinas
55bdd69411 ARM: Add base support for ARMv7-M
This patch adds the base support for the ARMv7-M
architecture. It consists of the corresponding arch/arm/mm/ files and
various #ifdef's around the kernel. Exception handling is implemented by
a subsequent patch.

[ukleinek: squash in some changes originating from commit

b5717ba (Cortex-M3: Add support for the Microcontroller Prototyping System)

from the v2.6.33-arm1 patch stack, port to post 3.6, drop zImage
support, drop reorganisation of pt_regs, assert CONFIG_CPU_V7M doesn't
leak into installed headers and a few cosmetic changes]

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Jonathan Austin <jonathan.austin@arm.com>
Tested-by: Jonathan Austin <jonathan.austin@arm.com>
Signed-off-by: Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
2013-04-17 21:38:10 +02:00
Russell King
8e9c24a2b2 ARM: virt: avoid clobbering lr when forcing svc mode
The safe_svcmode_maskall macro is used to ensure that we are running in
svc mode, causing an exception return from hvc mode if required.

This patch removes the unneeded lr clobber from the macro and operates
entirely on the temporary parameter register instead.

Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
[will: updated comment]
Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-01-10 21:09:31 +00:00
Dave Martin
1ecec696c8 ARM: 7599/1: head: Remove boot-time HYP mode check for v5 and below
The kernel can only be entered in HYP mode on CPUs which actually
support it, i.e. >= ARMv7. Pre-v6 platform support cannot coexist
in the same kernel as support for v7 and higher, so there is no
advantage in having the HYP mode check on pre-v6 hardware.

At least one pre-v6 board is known to fail when the HYP mode check
code is present, although the exact cause remains unknown and may
be unrelated.  [1]

This patch restores the old behaviour for pre-v6 platforms, whereby
the CPSR is forced directly to SVC mode with IRQs and FIQs masked.
All kernels capable of booting on v7 hardware will retain the
check, so this should not impair functionality.

[1] http://lists.arm.linux.org.uk/lurker/message/20121130.013814.19218413.en.html
([ARM] head.S change broke platform device registration?)

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-12-11 00:19:29 +00:00
Marc Zyngier
2a552d5e63 ARM: 7549/1: HYP: fix boot on some ARM1136 cores
It appears that performing a "movs pc, lr" to force the kernel into
SVC mode on the OMAP2420 (ARM1136) prevents the platform from booting
correctly (change introduced in 80c59da [ARM: virt: allow the kernel
to be entered in HYP mode]).

While the reason it fails is not understood yet (the same code runs
fine on the OMAP2430, ARM1136 as well), partially revert that change
for platforms that do not enter in HYP mode, preserving the new
feature and restoring a working kernel on the OMAP2420.

Reported-by: Tony Lindgren <tony@atomide.com>
Acked-by: Nicolas Pitre <nico@linaro.org>
Tested-by: Tony Lindgren <tony@atomide.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-10-09 12:11:34 +01:00
Russell King
648f3b6998 Merge branch 'hyp-boot-mode-rmk' of git://git.kernel.org/pub/scm/linux/kernel/git/maz/arm-platforms into devel-stable 2012-09-30 09:03:44 +01:00
Dave Martin
80c59dafb1 ARM: virt: allow the kernel to be entered in HYP mode
This patch does two things:

  * Ensure that asynchronous aborts are masked at kernel entry.
    The bootloader should be masking these anyway, but this reduces
    the damage window just in case it doesn't.

  * Enter svc mode via exception return to ensure that CPU state is
    properly serialised.  This does not matter when switching from
    an ordinary privileged mode ("PL1" modes in ARMv7-AR rev C
    parlance), but it potentially does matter when switching from
    another privileged mode such as hyp mode.

This should allow the kernel to boot safely either from svc mode or
hyp mode, even if no support for use of the ARM Virtualization
Extensions is built into the kernel.

Signed-off-by: Dave Martin <dave.martin@linaro.org>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
2012-09-19 08:32:50 +01:00
Russell King
8404663f81 ARM: 7527/1: uaccess: explicitly check __user pointer when !CPU_USE_DOMAINS
The {get,put}_user macros don't perform range checking on the provided
__user address when !CPU_USE_DOMAINS.

This patch reworks the out-of-line assembly accessors to check the user
address against a specified limit, returning -EFAULT if it is out of
range.

[will: changed get_user register allocation to match put_user]
[rmk: fixed building on older ARM architectures]

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Cc: stable@vger.kernel.org
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2012-09-09 17:28:47 +01:00
Linus Torvalds
820d41cf0c ARM: cleanups of io includes
Rob Herring has done a sweeping change cleaning up all of the mach/io.h includes,
 moving some of the oft-repeated macros to a common location and removing a bunch of
 boiler plate. This is another step closer to a common zImage for multiple platforms.

Merge tag 'cleanup2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc

Pull "ARM: cleanups of io includes" from Olof Johansson:
 "Rob Herring has done a sweeping change cleaning up all of the
  mach/io.h includes, moving some of the oft-repeated macros to a common
  location and removing a bunch of boiler plate.  This is another step
  closer to a common zImage for multiple platforms."

Fix up various fairly trivial conflicts (<mach/io.h> removal vs changes
around it, tegra localtimer.o is *still* gone, yadda-yadda).

* tag 'cleanup2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (29 commits)
  ARM: tegra: Include assembler.h in sleep.S to fix build break
  ARM: pxa: use common IOMEM definition
  ARM: dma-mapping: convert ARCH_HAS_DMA_SET_COHERENT_MASK to kconfig symbol
  ARM: __io abuse cleanup
  ARM: create a common IOMEM definition
  ARM: iop13xx: fix missing declaration of iop13xx_init_early
  ARM: fix ioremap/iounmap for !CONFIG_MMU
  ARM: kill off __mem_pci
  ARM: remove bunch of now unused mach/io.h files
  ARM: make mach/io.h include optional
  ARM: clps711x: remove unneeded include of mach/io.h
  ARM: dove: add explicit include of dove.h to addr-map.c
  ARM: at91: add explicit include of hardware.h to uncompressor
  ARM: ep93xx: clean-up mach/io.h
  ARM: tegra: clean-up mach/io.h
  ARM: orion5x: clean-up mach/io.h
  ARM: davinci: remove unneeded mach/io.h include
  [media] davinci: remove includes of mach/io.h
  ARM: OMAP: Remove remaining includes for mach/io.h
  ARM: msm: clean-up mach/io.h
  ...
2012-03-29 18:02:10 -07:00
Rob Herring
6f6f6a7029 ARM: create a common IOMEM definition
Several platforms create IOMEM defines for casting to 'void __iomem *',
and other platforms are incorrectly using __io() macro for the same
purpose. This creates a common definition and removes all the platform
specific versions. Rather than try to make linux/io.h and asm/io.h
assembly safe, the assembly version of IOMEM is moved into
asm/assembler.h.

Signed-off-by: Rob Herring <rob.herring@calxeda.com>
Cc: Russell King <linux@arm.linux.org.uk>
Cc: Sekhar Nori <nsekhar@ti.com>
Cc: Kevin Hilman <khilman@ti.com>
Acked-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Cc: Ryan Mallon <rmallon@gmail.com>
Cc: Eric Miao <eric.y.miao@gmail.com>
Cc: Haojian Zhuang <haojian.zhuang@marvell.com>
Acked-by: David Brown <davidb@codeaurora.org>
Cc: Daniel Walker <dwalker@fifo99.com>
Cc: Bryan Huntsman <bryanh@codeaurora.org>
Cc: Sascha Hauer <kernel@pengutronix.de>
Cc: Shawn Guo <shawn.guo@linaro.org>
Acked-by: Tony Lindgren <tony@atomide.com>
Acked-by: Paul Walmsley <paul@pwsan.com>
Acked-by: Viresh Kumar <viresh.kumar@st.com>
Cc: Rajeev Kumar <rajeev-dlh.kumar@st.com>
Cc: Colin Cross <ccross@android.com>
Cc: Olof Johansson <olof@lixom.net>
Cc: Stephen Warren <swarren@nvidia.com>
Acked-by: Linus Walleij <linus.walleij@linaro.org>
Acked-by: Arnd Bergmann <arnd@arndb.de>
2012-03-13 21:22:09 -05:00