Add support for LZ4 decompression in the Linux kernel. The LZ4
decompression APIs for the kernel are based on the LZ4 implementation
by Yann Collet.
Benchmark Results (PATCH v3)
Compiler: Linaro ARM gcc 4.6.2
1. ARMv7, 1.5GHz based board
Kernel: linux 3.4
Uncompressed Kernel Size: 14MB
        Compressed Size    Decompression Speed
LZO     6.7MB              20.1MB/s, 25.2MB/s (UA)
LZ4     7.3MB              29.1MB/s, 45.6MB/s (UA)
2. ARMv7, 1.7GHz based board
Kernel: linux 3.7
Uncompressed Kernel Size: 14MB
        Compressed Size    Decompression Speed
LZO     6.0MB              34.1MB/s, 52.2MB/s (UA)
LZ4     6.5MB              86.7MB/s
- UA: Unaligned memory Access support
- Latest patch set for LZO applied
This patch set adds support for an LZ4-compressed kernel. LZ4 is a
very fast lossless compression algorithm and it also features an
extremely fast decoder [1].
But we already have five decompressors, and one question that does
arise is where we stop adding new ones. This issue was discussed and
reached the following conclusion [2].
Russell King said that we should have:
- one decompressor which is the fastest
- one decompressor for the highest compression ratio
- one popular decompressor (eg conventional gzip)
If we have a replacement for one of these, then it should do exactly
that: replace it.
The benchmark shows an 8% increase in image size versus a 66% increase
in decompression speed compared to LZO (which had been known as the
fastest decompressor in the kernel). The "fast but may not be small"
compression title has therefore clearly been taken by LZ4 [3].
[1] http://code.google.com/p/lz4/
[2] http://thread.gmane.org/gmane.linux.kbuild.devel/9157
[3] http://thread.gmane.org/gmane.linux.kbuild.devel/9347
LZ4 homepage: http://fastcompression.blogspot.com/p/lz4.html
LZ4 source repository: http://code.google.com/p/lz4/
Signed-off-by: Kyungsik Lee <kyungsik.lee@lge.com>
Signed-off-by: Yann Collet <yann.collet.73@gmail.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Florian Fainelli <florian@openwrt.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
lib: add support for LZ4-compressed kernel
Add support for extracting LZ4-compressed kernel images, as well as
LZ4-compressed ramdisk images in the kernel boot process.
Signed-off-by: Kyungsik Lee <kyungsik.lee@lge.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Russell King <rmk@arm.linux.org.uk>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Florian Fainelli <florian@openwrt.org>
Cc: Yann Collet <yann.collet.73@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
lib: add lz4 compressor module
This patch set adds support for LZ4 compression and for using it
through the crypto API.
As shown below, the compressed data is a little bigger, but compression
speed is faster when unaligned memory access is enabled. LZ4
de/compression can also be used through the crypto API, and it will be
useful for other potential users of LZ4 compression.
lz4 Compression Benchmark:
Compiler: ARM gcc 4.6.4
ARMv7, 1 GHz based board
Kernel: linux 3.4
Uncompressed data Size: 101 MB
        Compressed Size    Compression Speed
LZO     72.1MB             32.1MB/s, 33.0MB/s (UA)
LZ4     75.1MB             30.4MB/s, 35.9MB/s (UA)
LZ4HC   59.8MB              2.4MB/s,  2.5MB/s (UA)
- UA: Unaligned memory Access support
- Latest patch set for LZO applied
This patch:
Add support for LZ4 compression in the Linux kernel. The LZ4
compression APIs for the kernel are based on the LZ4 implementation by
Yann Collet and were adapted to the kernel coding style.
LZ4 homepage : http://fastcompression.blogspot.com/p/lz4.html
LZ4 source repository : http://code.google.com/p/lz4/
svn revision : r90
Two APIs are added:
lz4_compress() supports basic LZ4 compression, whereas lz4hc_compress()
supports high compression: CPU performance drops but the compression
ratio goes up. The caller must supply pre-allocated working memory of
the defined size, and the destination buffer must be allocated with the
size given by lz4_compressbound().
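A minimal usage sketch of the pattern described above (output sizing
via lz4_compressbound() plus pre-allocated working memory); the
LZ4_MEM_COMPRESS constant is assumed from linux/lz4.h, and cleanup is
trimmed:

	#include <linux/lz4.h>
	#include <linux/vmalloc.h>

	size_t dst_len = lz4_compressbound(src_len);	/* worst-case output size */
	unsigned char *dst = vmalloc(dst_len);
	void *wrkmem = vmalloc(LZ4_MEM_COMPRESS);	/* pre-allocated working memory */

	/* lz4_compress() returns 0 on success and updates dst_len
	 * to the actual compressed size */
	if (dst && wrkmem &&
	    lz4_compress(src, src_len, dst, &dst_len, wrkmem) == 0)
		pr_info("compressed %zu -> %zu bytes\n", src_len, dst_len);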
[akpm@linux-foundation.org: make lz4_compresshcctx() static]
Signed-off-by: Chanho Min <chanho.min@lge.com>
Cc: "Darrick J. Wong" <djwong@us.ibm.com>
Cc: Bob Pearson <rpearson@systemfabricworks.com>
Cc: Richard Weinberger <richard@nod.at>
Cc: Herbert Xu <herbert@gondor.hengli.com.au>
Cc: Yann Collet <yann.collet.73@gmail.com>
Cc: Kyungsik Lee <kyungsik.lee@lge.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
lib/lz4: correct the LZ4 license
The LZ4 code is listed as using the "BSD 2-Clause License".
Signed-off-by: Richard Laager <rlaager@wiktel.com>
Acked-by: Kyungsik Lee <kyungsik.lee@lge.com>
Cc: Chanho Min <chanho.min@lge.com>
Cc: Richard Yao <ryao@gentoo.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
[ The 2-clause BSD can be just converted into GPL, but that's rude and
pointless, so don't do it - Linus ]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
lz4: fix compression/decompression signedness mismatch
The LZ4 compression and decompression functions require input/output
parameters that differ in signedness: unsigned char for compression and
signed char for decompression.
Change the decompression API to take "(const) unsigned char *" as well.
Signed-off-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Cc: Kyungsik Lee <kyungsik.lee@lge.com>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>
Cc: Yann Collet <yann.collet.73@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
lz4: ensure length does not wrap
Given some pathologically compressed data, lz4 could possibly decide to
wrap a few internal variables, causing unknown things to happen. Catch
this before the wrapping happens and abort the decompression.
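A sketch of the kind of check this implies, as understood from the
description (the exact kernel code may differ): when accumulating an
LZ4 run length from a stream of 255-valued bytes, catch the wrap before
the length is used:

	if (length == RUN_MASK) {
		size_t len = *ip++;

		for (; len == 255; length += 255)
			len = *ip++;
		/* abort before the accumulated length can wrap around */
		if (unlikely(length > (size_t)(-length)))
			goto _output_error;
		length += len;
	}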
Reported-by: "Don A. Bailey" <donb@securitymouse.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
lz4: fix another possible overrun
There is one other possible overrun in the lz4 code as implemented by
Linux at this point in time (which differs from the upstream lz4
codebase, but will be synced up in a future kernel release). As
pointed out by Don, we also need to check for overflow in the data
itself.
While we are at it, replace the odd error return value with just a
"simple" -1 value as the return value is never used for anything other
than a basic "did this work or not" check.
Reported-by: "Don A. Bailey" <donb@securitymouse.com>
Reported-by: Willy Tarreau <w@1wt.eu>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
lz4: add overrun checks to lz4_uncompress_unknownoutputsize()
Jan points out that I forgot to make the needed fixes to the
lz4_uncompress_unknownoutputsize() function to mirror the changes done
in lz4_decompress() with regards to potential pointer overflows.
The only in-kernel user of this function is the zram code, which only
takes data from a valid compressed buffer that it made itself, so it's
not a big issue. But due to external kernel modules using this
function, it's better to be safe here.
Reported-by: Jan Beulich <JBeulich@suse.com>
Cc: "Don A. Bailey" <donb@securitymouse.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
LZ4: fix the data abort issue
If part of the compressed data is corrupted, or the compressed data is
entirely fake, memory accesses beyond the buffer limit are possible.
This is the log from my system using lz4 decompression.
[6502]data abort, halting
[6503]r0 0x00000000 r1 0x00000000 r2 0xdcea0ffc r3 0xdcea0ffc
[6509]r4 0xb9ab0bfd r5 0xdcea0ffc r6 0xdcea0ff8 r7 0xdce80000
[6515]r8 0x00000000 r9 0x00000000 r10 0x00000000 r11 0xb9a98000
[6522]r12 0xdcea1000 usp 0x00000000 ulr 0x00000000 pc 0x820149bc
[6528]spsr 0x400001f3
and the memory addresses of some variables at the moment are
ref:0xdcea0ffc, op:0xdcea0ffc, oend:0xdcea1000
As you can see, COPYLENGTH is 8 bytes, so @ref and @op can access
memory beyond @oend.
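A sketch of the bound check this implies, using the names from the log
above (the exact condition in the fix may differ):

	/* the copy loop advances @op in COPYLENGTH (8-byte) strides,
	 * so @cpy must leave room for a full stride before @oend */
	if (cpy > oend - COPYLENGTH)
		goto _output_error;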
Change-Id: I9919c9bcfca9ae9e26d83fa49afff99e74295d3b
Signed-off-by: JeHyeon Yeon <tom.yeon@windriver.com>
Reviewed-by: David Sterba <dsterba@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
into cm-11.0
Change-Id: I1e2c00ea82da0dfcfe1670a4c06f2dc182bec803
* stable/linux-3.4.y: (811 commits)
Linux 3.4.67
mm: do not grow the stack vma just because of an overrun on preceding vma
mm/mmap: check for RLIMIT_AS before unmapping
drm/radeon: fix hw contexts for SUMO2 asics
watchdog: ts72xx_wdt: locking bug in ioctl
parisc: fix interruption handler to respect pagefault_disable()
KVM: PPC: Book3S HV: Fix typo in saving DSCR
ext4: fix memory leak in xattr
vfs: allow O_PATH file descriptors for fstatfs()
random: run random_int_secret_init() run after all late_initcalls
ALSA: hda - Add fixup for ASUS N56VZ
ALSA: snd-usb-usx2y: remove bogus frame checks
Linux 3.4.66
ext4: avoid hang when mounting non-journal filesystems with orphan list
Btrfs: change how we queue blocks for backref checking
tile: use a more conservative __my_cpu_offset in CONFIG_PREEMPT
ACPI / IPMI: Fix atomic context requirement of ipmi_msg_handler()
mm, show_mem: suppress page counts in non-blockable contexts
staging: comedi: ni_65xx: (bug fix) confine insn_bits to one subdevice
dmaengine: imx-dma: fix slow path issue in prep_dma_cyclic
...
Conflicts:
kernel/futex.c
Change-Id: Iea4c0bcac37b64b6376f5fbc79a23100484ae402
commit 4b59e6c4730978679b414a8da61514a2518da512 upstream.
On large systems with a lot of memory, walking all RAM to determine page
types may take a half second or even more.
In non-blockable contexts, the page allocator will emit a page allocation
failure warning unless __GFP_NOWARN is specified. In such contexts, irqs
are typically disabled and such a lengthy delay may even result in NMI
watchdog timeouts.
To fix this, suppress the page walk in such contexts when printing the
page allocation failure warning.
Signed-off-by: David Rientjes <rientjes@google.com>
Cc: Mel Gorman <mgorman@suse.de>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: Dave Hansen <dave@linux.vnet.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Xishi Qiu <qiuxishi@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit ac5a2962b02f57dea76d314ef2521a2170b28ab6 upstream.
There is a race between klist_remove() and klist_release().
klist_remove() uses a local variable, waiter, saved on the stack. When
klist_release() calls wake_up_process(waiter->process) to wake up the
waiter, the waiter might run immediately and reuse the stack.
klist_release() then calls list_del(&waiter->list), modifying wait data
that no longer belongs to it and corrupting the prior waiter's stack.
The patch fixes it against kernel 3.9.
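A sketch of the reordering in klist_release(), as understood from the
description (finish every access to the on-stack waiter before the
wakeup, after which the stack may be reused):

	list_del(&waiter->list);	/* previously done after the wakeup */
	waiter->woken = 1;
	mb();
	wake_up_process(waiter->process);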
Signed-off-by: wang, biao <biao.wang@intel.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a49b7e82cab0f9b41f483359be83f44fbb6b4979 upstream.
Anatol Pomozov identified a race condition that hits module unloading
and re-loading. To quote Anatol:
"This is a race condition that exists between kset_find_obj() and
kobject_put(). kset_find_obj() might return a kobject that has a
refcount equal to 0 if this kobject is being freed by kobject_put() in
another thread.
Here is the timeline for the crash, in the case where kset_find_obj()
searches for an object that nobody holds while another thread is doing
kobject_put() on the same kobject:
  THREAD A (calls kset_find_obj())    THREAD B (calls kobject_put())
  spin_lock()
                                      atomic_dec_return(kobj->kref), counter gets zero here
                                      ... starts kobject cleanup ....
                                      spin_lock() // WAIT thread A in kobj_kset_leave()
  iterate over kset->list
  atomic_inc(kobj->kref) (counter becomes 1)
  spin_unlock()
                                      spin_lock() // taken
                                      // it does not know that thread A increased counter so it
                                      remove obj from list
                                      spin_unlock()
                                      vfree(module) // frees module object with containing kobj
  // kobj points to freed memory area!!
  kobject_put(kobj) // OOPS!!!!
The race above happens because module.c tries to use kset_find_obj()
when somebody unloads a module. The module.c code was introduced in
commit 6494a93d55fa"
Anatol supplied a patch specific to module.c that worked around the
problem by simply not using kset_find_obj() at all, but rather than
make a local band-aid, this just fixes kset_find_obj() to be
thread-safe, using the proper model of refusing to get a new reference
if the refcount has already dropped to zero.
See examples of this proper refcount handling not only in the kref
documentation, but in various other equivalent uses of this pattern by
grepping for atomic_inc_not_zero().
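A minimal sketch of that pattern applied to kobjects (assuming
kref_get_unless_zero() from the kref API):

	static struct kobject *kobject_get_unless_zero(struct kobject *kobj)
	{
		/* refuse a new reference if the refcount already hit zero */
		if (!kref_get_unless_zero(&kobj->kref))
			kobj = NULL;
		return kobj;
	}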
[ Side note: the module race does indicate that module loading and
unloading is not properly serialized wrt sysfs information using the
module mutex. That may require further thought, but this is the
correct fix at the kobject layer regardless. ]
Reported-analyzed-and-tested-by: Anatol Pomozov <anatol.pomozov@gmail.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Markus F.X.J. Oberhumer <markus@oberhumer.com>
This commit updates the kernel LZO code to the current upstream version
which features a significant speed improvement - benchmarking the Calgary
and Silesia test corpora typically shows a doubled performance in
both compression and decompression on modern i386/x86_64/powerpc machines.
Signed-off-by: Markus F.X.J. Oberhumer <markus@oberhumer.com>
Rename the source file to match the function name and thereby
also make room for a possible future even slightly faster
"non-safe" decompressor version.
Signed-off-by: Markus F.X.J. Oberhumer <markus@oberhumer.com>
This is the 3.4.35 stable release
Conflicts:
drivers/base/power/main.c
drivers/net/tun.c
kernel/cgroup.c
kernel/power/suspend.c
Signed-off-by: Colin Cross <ccross@android.com>
Change-Id: I1f673ffa338439c6f6f227d67ffff24487e0cedf
commit 6cdae7416a1c45c2ce105a78187d9b7e8feb9e24 upstream.
The iteration logic of idr_get_next() is borrowed mostly verbatim from
idr_for_each(). It walks down the tree looking for the slot matching
the current ID. If the matching slot is not found, the ID is
incremented by the distance of a single slot at the given level and the
walk repeats.
The implementation assumes that during the whole iteration the id is
aligned to the layer boundaries of the level closest to the leaf, which
is true for all iterations starting from zero or an existing element
and thus is fine for idr_for_each().
However, idr_get_next() may be given any starting point, and if the
starting id hits the middle of a non-existent layer, the increment to
the next layer will end up skipping the same offset into it. For
example, an IDR with IDs filled between [64, 127] would look like the
following.

      [  0  64 ... ]
     /----/    |
     |         |
    NULL   [ 64 ... 127 ]

If idr_get_next() is called with 63 as the starting point, it will try
to follow down the pointer from 0. As it is NULL, it will then try to
proceed to the next slot in the same level by adding the slot distance
at that level, which is 64 - making the next try 127. It goes around
the loop, finds and returns 127, skipping [64, 126].
Note that this bug also triggers in an idr_for_each_entry() loop that
deletes during iteration, as deletions can make layers go away, leaving
the iteration with an unaligned ID pointing into missing layers.
Fix it by ensuring that proceeding to the next slot doesn't carry over
the unaligned offset - i.e. use round_up(id + 1, slot_distance) instead
of id += slot_distance.
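In code, the change amounts to one line (using the example above:
starting from 63, round_up(64, 64) lands on 64 instead of
63 + 64 = 127):

	/* was: id += slot_distance; */
	id = round_up(id + 1, slot_distance);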
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: David Teigland <teigland@redhat.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit 7810cc1e7721220f1ed2a23ca95113d6434f6dcd upstream.
digsig_verify_rsa() does not free the kmalloc'ed buffer returned by
mpi_get_buffer().
Signed-off-by: YOSHIFUJI Hideaki <yoshfuji@linux-ipv6.org>
Signed-off-by: Dmitry Kasatkin <dmitry.kasatkin@intel.com>
Signed-off-by: James Morris <james.l.morris@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fcc16882ac4532aaa644bff444f0c5d6228ba71e upstream.
The atomic64 library uses a handful of static spin locks to implement
atomic 64-bit operations on architectures without support for atomic
64-bit instructions.
Unfortunately, the spinlocks are initialized in a pure initcall and that
is too late for the vfs namespace code which wants to use atomic64
operations before the initcall is run.
This became a problem as of commit 8823c079ba71: "vfs: Add setns support
for the mount namespace".
This leads to BUG messages such as:
BUG: spinlock bad magic on CPU#0, swapper/0/0
lock: atomic64_lock+0x240/0x400, .magic: 00000000, .owner: <none>/-1, .owner_cpu: 0
do_raw_spin_lock+0x158/0x198
_raw_spin_lock_irqsave+0x4c/0x58
atomic64_add_return+0x30/0x5c
alloc_mnt_ns.clone.14+0x44/0xac
create_mnt_ns+0xc/0x54
mnt_init+0x120/0x1d4
vfs_caches_init+0xe0/0x10c
start_kernel+0x29c/0x300
coming out early during boot when spinlock debugging is enabled.
Fix this by initializing the spinlocks statically at compile time.
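A sketch of what the compile-time initialization looks like (the
NR_LOCKS name and the cacheline padding are assumptions about the
atomic64 library's lock array):

	static union {
		raw_spinlock_t lock;
		char pad[L1_CACHE_BYTES];
	} atomic64_lock[NR_LOCKS] __cacheline_aligned_in_smp = {
		/* every lock is usable from the very first reference */
		[0 ... (NR_LOCKS - 1)] = {
			.lock = __RAW_SPIN_LOCK_UNLOCKED(atomic64_lock.lock),
		},
	};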
Reported-and-tested-by: Vaibhav Bedia <vaibhav.bedia@ti.com>
Tested-by: Tony Lindgren <tony@atomide.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit a3cea9894157c20a5b1ec08b7e0b5f2019740c10 upstream.
Since 4.4, GCC on MIPS no longer recognizes the "h" constraint,
leading to this build failure:
CC lib/mpi/generic_mpih-mul1.o
lib/mpi/generic_mpih-mul1.c: In function 'mpihelp_mul_1':
lib/mpi/generic_mpih-mul1.c:50:3: error: impossible constraint in 'asm'
This patch updates MPI with the latest umul_ppmm implementations for MIPS.
Signed-off-by: Manuel Lauss <manuel.lauss@gmail.com>
Cc: Linux-MIPS <linux-mips@linux-mips.org>
Cc: Dmitry Kasatkin <dmitry.kasatkin@intel.com>
Cc: James Morris <jmorris@namei.org>
Patchwork: https://patchwork.linux-mips.org/patch/4612/
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
Cc: Shuah Khan <shuah.khan@hp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit eedce141cd2dad8d0cefc5468ef41898949a7031 upstream.
The genalloc code uses the bitmap API from include/linux/bitmap.h and
lib/bitmap.c, which is based on long values. Both bitmap_set from
lib/bitmap.c and bitmap_set_ll, which is the lockless version from
genalloc.c, use BITMAP_LAST_WORD_MASK to set the first bits in a long in
the bitmap.
That one uses (1 << bits) - 1, 0b111, if you are setting the first three
bits. This means that the API counts from the least significant bits
(LSB from now on) to the MSB. The LSB in the first long is bit 0, then.
The same works for the lookup functions.
The genalloc code uses longs for the bitmap, as it should. In
include/linux/genalloc.h, struct gen_pool_chunk has unsigned long
bits[0] as its last member. When allocating the struct, genalloc should
reserve enough space for the bitmap. This should be a proper number of
longs that can fit the amount of bits in the bitmap.
However, genalloc allocates an integer number of bytes that fit the
amount of bits, but may not be an integer amount of longs. 9 bytes, for
example, could be allocated for 70 bits.
This is a problem in itself if the Least Significant Bit of a long is
in the byte with the largest address, which happens on Big Endian
machines. This means genalloc is not allocating the byte in which it
will try to set or check a bit.
This may end up in memory corruption, where genalloc will try to set
bits it has not allocated. In fact, genalloc may not set these bits
because it may find them already set, since they were never zeroed
(they were never allocated). And that's what causes a BUG when
gen_pool_destroy is called and checks for any set bits.
What really happens is that genalloc uses kmalloc_node with __GFP_ZERO
on gen_pool_add_virt. With SLAB and SLUB, this means the whole slab
will be cleared, not only the requested bytes. Since struct
gen_pool_chunk has a size that is a multiple of 8, and slab sizes are
multiples of 8, we get lucky and allocate and clear the right amount of
bytes.
However, this is not the case with SLOB, or with older code that did a
memset after allocating instead of using __GFP_ZERO.
So a simple module like this (running 3.6.0) will cause a crash when
rmmod'ed.
[root@phantom-lp2 foo]# cat foo.c
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/genalloc.h>
MODULE_LICENSE("GPL");
MODULE_VERSION("0.1");
static struct gen_pool *foo_pool;
static __init int foo_init(void)
{
	int ret;

	foo_pool = gen_pool_create(10, -1);
	if (!foo_pool)
		return -ENOMEM;
	ret = gen_pool_add(foo_pool, 0xa0000000, 32 << 10, -1);
	if (ret) {
		gen_pool_destroy(foo_pool);
		return ret;
	}
	return 0;
}

static __exit void foo_exit(void)
{
	gen_pool_destroy(foo_pool);
}

module_init(foo_init);
module_exit(foo_exit);
[root@phantom-lp2 foo]# zcat /proc/config.gz | grep SLOB
CONFIG_SLOB=y
[root@phantom-lp2 foo]# insmod ./foo.ko
[root@phantom-lp2 foo]# rmmod foo
------------[ cut here ]------------
kernel BUG at lib/genalloc.c:243!
cpu 0x4: Vector: 700 (Program Check) at [c0000000bb0e7960]
pc: c0000000003cb50c: .gen_pool_destroy+0xac/0x110
lr: c0000000003cb4fc: .gen_pool_destroy+0x9c/0x110
sp: c0000000bb0e7be0
msr: 8000000000029032
current = 0xc0000000bb0e0000
paca = 0xc000000006d30e00 softe: 0 irq_happened: 0x01
pid = 13044, comm = rmmod
kernel BUG at lib/genalloc.c:243!
[c0000000bb0e7ca0] d000000004b00020 .foo_exit+0x20/0x38 [foo]
[c0000000bb0e7d20] c0000000000dff98 .SyS_delete_module+0x1a8/0x290
[c0000000bb0e7e30] c0000000000097d4 syscall_exit+0x0/0x94
--- Exception: c00 (System Call) at 000000800753d1a0
SP (fffd0b0e640) is in userspace
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Benjamin Gaignard <benjamin.gaignard@stericsson.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit e96875677fb2b7cb739c5d7769824dff7260d31d upstream.
Account for all properties when a and/or b are 0:
gcd(0, 0) = 0
gcd(a, 0) = a
gcd(0, b) = b
Fixes no known problems in current kernels.
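A sketch of a Euclidean gcd() satisfying all three properties (swapping
so that b <= a lets the zero cases fall out of a single test):

	unsigned long gcd(unsigned long a, unsigned long b)
	{
		unsigned long r;

		if (a < b)
			swap(a, b);
		if (!b)		/* covers gcd(a, 0), gcd(0, b) and gcd(0, 0) */
			return a;
		while ((r = a % b) != 0) {
			a = b;
			b = r;
		}
		return b;
	}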
Signed-off-by: Davidlohr Bueso <dave@gnu.org>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit bc01637a80f5b670bd70a0279d3f93fa8de1c96d upstream.
When pkcs_1_v1_5_decode_emsa() returns without error but the hash sizes
do not match, the hash comparison is not done and digsig_verify_rsa()
returns no error. This is a bug, and this patch fixes it.
The bug was introduced in v3.3 by commit b35e286a640f ("lib/digsig:
pkcs_1_v1_5_decode_emsa cleanup").
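A sketch of the missing check, with illustrative names (mlen for the
decoded message length, h/hlen for the expected hash):

	/* reject unless the decoded length matches the hash length
	 * AND the hash contents compare equal */
	if (mlen != hlen || memcmp(out, h, hlen))
		err = -EINVAL;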
Signed-off-by: Dmitry Kasatkin <dmitry.kasatkin@intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Emulate NMIs on systems where they are not available by using timer
interrupts on other cpus. Each cpu will use its softlockup hrtimer
to update a per-cpu timestamp and then check the timestamp on the
next cpu.
The emulation uses the softlockup threshold instead of the hardlockup
threshold because the implementation runs at the same resolution as
the softlockup detector.
This patch is useful on systems where the hardlockup detector is not
available due to a lack of NMIs, for example most ARM SoCs.
Without this patch any cpu stuck with interrupts disabled can
cause a hardware watchdog reset with no debugging information,
but with this patch the kernel can detect the lockup and panic,
which can result in useful debugging info.
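A rough sketch of the per-cpu check, with hypothetical helper and
variable names (the real implementation piggybacks on the existing
softlockup hrtimer callback):

	static DEFINE_PER_CPU(unsigned long, watchdog_touch_ts);

	/* hypothetical: called from this cpu's softlockup hrtimer */
	static void check_next_cpu(void)
	{
		int next = cpumask_next(smp_processor_id(), cpu_online_mask);

		if (next >= nr_cpu_ids)
			next = cpumask_first(cpu_online_mask);

		/* publish our own liveness... */
		__this_cpu_write(watchdog_touch_ts, jiffies);

		/* ...and check the next cpu against the softlockup threshold */
		if (time_after(jiffies, per_cpu(watchdog_touch_ts, next) +
					softlockup_thresh * HZ))
			panic("Hard lockup detected on cpu %d", next);
	}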
Change-Id: I3e6de6da27183b2a0a73a390e394f95a945856d9
Signed-off-by: Colin Cross <ccross@android.com>
Conflicts:
drivers/base/power/main.c
Change-Id: I0c7d106d2de75d0e40f167245ad4cc37e1556bb0
[ Upstream commit 914bec1011a25f65cdc94988a6f974bfb9a3c10d ]
dql->num_queued could change while processing dql_completed().
To provide a consistent calculation, an on-stack variable is added.
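A sketch of the on-stack snapshot at the top of dql_completed()
(ACCESS_ONCE was the idiom for this in kernels of that era):

	/* read the racy field exactly once, then use the local copy */
	unsigned int num_queued = ACCESS_ONCE(dql->num_queued);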
Signed-off-by: Hiroaki SHIMODA <shimoda.hiroaki@gmail.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Denys Fedoryshchenko <denys@visp.net.lb>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 25426b794efdc70dde7fd3134dc56fac3e7d562d ]
When the pattern below is observed,

                                              TIME
        dql_queued()       dql_completed()      |
    a)  initial state                           |
                                                |
    b)  X bytes queued                          |
                                                |
    c)  Y bytes queued                          |
                                                |
    d)                     X bytes completed    |
                                                |
    e)  Z bytes queued                          |
                                                |
    f)                     Y bytes completed    V
a) dql->limit already has some value and there are no in-flight packets.
b) X bytes are queued.
c) Y bytes are queued, exceeding the limit.
d) X bytes complete; dql->prev_ovlimit is set, and dql->prev_num_queued
   is set to Y.
e) Z bytes are queued.
f) Y bytes complete. inprogress and prev_inprogress are true.
At f), according to the comment, all_prev_completed should become true
and the limit should be increased. But POSDIFF() ignores the
(completed == dql->prev_num_queued) case, so the limit is decreased.
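A sketch of the corrected test, which counts the equal case as
"everything previously queued has completed" (AFTER_EQ shown as
understood from the fix):

	/* wrap-safe "A >= B" on the free-running byte counters */
	#define AFTER_EQ(A, B) ((int)((A) - (B)) >= 0)

	all_prev_completed = AFTER_EQ(completed, dql->prev_num_queued);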
Signed-off-by: Hiroaki SHIMODA <shimoda.hiroaki@gmail.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Denys Fedoryshchenko <denys@visp.net.lb>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
[ Upstream commit 0cfd32b736ae0c36b42697584811042726c07cba ]
POSDIFF() fails to take the integer overflow case into account.
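A sketch of a wrap-safe POSDIFF: compute the difference in modular
arithmetic first, then test its sign, so a counter that has wrapped
past its comparand still yields a positive difference:

	#define POSDIFF(A, B) ((int)((A) - (B)) > 0 ? (A) - (B) : 0)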
Signed-off-by: Hiroaki SHIMODA <shimoda.hiroaki@gmail.com>
Cc: Tom Herbert <therbert@google.com>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Denys Fedoryshchenko <denys@visp.net.lb>
Acked-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit cbf8ae32f66a9ceb8907ad9e16663c2a29e48990 upstream.
The memory that the parameter __key points to is used as an iterator in
btree_get_prev(), so if we save off a bkey() pointer in retry_key and
then assign that to __key, we'll end up corrupting the btree internals
when we do, e.g.,
longcpy(__key, bkey(geo, node, i), geo->keylen);
to return the key value. What we should do instead is use longcpy() to
copy the key value that retry_key points to into __key.
This can cause a btree to get corrupted by seemingly read-only
operations such as btree_for_each_safe.
[akpm@linux-foundation.org: avoid the double longcpy()]
Signed-off-by: Roland Dreier <roland@purestorage.com>
Acked-by: Joern Engel <joern@logfs.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
commit fffaee365fded09f9ebf2db19066065fa54323c3 upstream.
This patch fixes a bug in the macro radix_tree_for_each_contig().
If radix_tree_next_slot() sees a NULL in the next slot it returns NULL,
but the radix_tree_next_chunk() that follows switches iteration into
the next chunk. As a result the iteration becomes non-contiguous and
breaks vfs "splice" and all its users.
Signed-off-by: Konstantin Khlebnikov <khlebnikov@openvz.org>
Reported-and-bisected-by: Hans de Bruin <jmdebruin@xmsnet.nl>
Reported-and-bisected-by: Ondrej Zary <linux@rainbow-software.org>
Reported-bisected-and-tested-by: Toralf Förster <toralf.foerster@gmx.de>
Link: https://lkml.org/lkml/2012/6/5/64
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This patch implements the word-at-a-time interface for ARM using the
same algorithm as x86. We use the fls macro from ARMv5 onwards, where
we have a clz instruction available which saves us a mov instruction
when targeting Thumb-2. For older CPUs, we use the magic 0x0ff0001
constant. Big-endian configurations make use of the implementation from
asm-generic.
With this implemented, we can replace our byte-at-a-time strnlen_user
and strncpy_from_user functions with the optimised generic versions.
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
lib: Sparc's strncpy_from_user is generic enough, move under lib/
To use this, an architecture simply needs to:
1) Provide a user_addr_max() implementation via asm/uaccess.h
2) Add "select GENERIC_STRNCPY_FROM_USER" to their arch Kcnfig
3) Remove the existing strncpy_from_user() implementation and symbol
exports their architecture had.
Signed-off-by: David S. Miller <davem@davemloft.net>
Acked-by: David Howells <dhowells@redhat.com>
lib: add generic strnlen_user() function
This adds a new generic optimized strnlen_user() function that uses the
<asm/word-at-a-time.h> infrastructure to portably do efficient string
handling.
In many ways, strnlen is much simpler than strncpy, and in particular we
can always pre-align the words we load from memory. That means that all
the worries about alignment etc are a non-issue, so this one can easily
be used on any architecture. You obviously do have to provide the
appropriate word-at-a-time.h macros.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
kernel: Move REPEAT_BYTE definition into linux/kernel.h
And make sure that everything using it explicitly includes
that header file.
Signed-off-by: David S. Miller <davem@davemloft.net>
word-at-a-time: make the interfaces truly generic
This changes the interfaces in <asm/word-at-a-time.h> to be a bit more
complicated, but a lot more generic.
In particular, it allows us to really do the operations efficiently on
both little-endian and big-endian machines, pretty much regardless of
machine details. For example, if you can rely on a fast population
count instruction on your architecture, this will allow you to make your
optimized <asm/word-at-a-time.h> file with that.
NOTE! The "generic" version in include/asm-generic/word-at-a-time.h is
not truly generic, it actually only works on big-endian. Why? Because
on little-endian the generic algorithms are wasteful, since you can
inevitably do better. The x86 implementation is an example of that.
(The only truly non-generic part of the asm-generic implementation is
the "find_zero()" function, and you could make a little-endian version
of it. And if the Kbuild infrastructure allowed us to pick a particular
header file, that would be lovely)
The <asm/word-at-a-time.h> functions are as follows:
- WORD_AT_A_TIME_CONSTANTS: specific constants that the algorithm
uses.
- has_zero(): take a word, and determine if it has a zero byte in it.
It gets the word, the pointer to the constant pool, and a pointer to
an intermediate "data" field it can set.
This is the "quick-and-dirty" zero tester: it's what is run inside
the hot loops.
- "prep_zero_mask()": take the word, the data that has_zero() produced,
and the constant pool, and generate an *exact* mask of which byte had
the first zero. This is run directly *outside* the loop, and allows
the "has_zero()" function to answer the "is there a zero byte"
question without necessarily getting exactly *which* byte is the
first one to contain a zero.
If you do multiple byte lookups concurrently (eg "hash_name()", which
looks for both NUL and '/' bytes), after you've done the prep_zero_mask()
phase, the result of those can be or'ed together to get the "either
or" case.
- The result from "prep_zero_mask()" can then be fed into "find_zero()"
(to find the byte offset of the first byte that was zero) or into
"zero_bytemask()" (to find the bytemask of the bytes preceding the
zero byte).
The existence of zero_bytemask() is optional, and is not necessary
for the normal string routines. But dentry name hashing needs it, so
if you enable DENTRY_WORD_AT_A_TIME you need to expose it.
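As a sketch, a strnlen-style hot loop built from these interfaces looks
roughly like this (create_zero_mask() is assumed as the intermediate
step between prep_zero_mask() and find_zero() in the generic
<asm/word-at-a-time.h> header; src is assumed word-aligned):

	static inline long strnlen_sketch(const char *src)
	{
		const struct word_at_a_time constants = WORD_AT_A_TIME_CONSTANTS;
		unsigned long c, data;
		long res = 0;

		for (;;) {
			c = *(const unsigned long *)(src + res);	/* aligned word load */
			if (has_zero(c, &data, &constants)) {
				data = prep_zero_mask(c, data, &constants);
				data = create_zero_mask(data);
				return res + find_zero(data);	/* byte offset of the NUL */
			}
			res += sizeof(unsigned long);
		}
	}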
This changes the generic strncpy_from_user() function and the dentry
hashing functions to use these modified word-at-a-time interfaces. This
gets us back to the optimized state of the x86 strncpy that we lost in
the previous commit when moving over to the generic version.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
ARM: 7450/1: dcache: select DCACHE_WORD_ACCESS for little-endian ARMv6+ CPUs
DCACHE_WORD_ACCESS uses the word-at-a-time API for optimised string
comparisons in the vfs layer.
This patch implements support for load_unaligned_zeropad for ARM CPUs
with native support for unaligned memory accesses (v6+) when running
little-endian.
Change-Id: Ifdf8207f2581f93870eb0e627f5d12f97c4be7cc
Reviewed-by: Nicolas Pitre <nico@linaro.org>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
This ignores %n in printf again, as was originally documented.
Implementing %n poses a greater security risk than utility, so it should
stay ignored. To help anyone attempting to use %n, a warning will be
emitted if it is encountered.
Based on an earlier patch by Joe Perches.
Because %n was designed to write to pointers on the stack, it has been
frequently used as an attack vector when bugs are found that leak
user-controlled strings into functions that ultimately process format
strings. While this class of bug can still be turned into an
information leak, removing %n eliminates the common method of elevating
such a bug into an arbitrary kernel memory writing primitive,
significantly reducing the danger of this class of bug.
For seq_file users that need to know the length of a written string for
padding, please see seq_setwidth() and seq_pad() instead.
Signed-off-by: Kees Cook <keescook@chromium.org>
Cc: Joe Perches <joe@perches.com>
Cc: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: David Miller <davem@davemloft.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: 9196436ab2f713b823a2ba2024cb69f40b2f54a5
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
CRs-fixed: 665291
Change-Id: Id191acaf66e3c395df92b0a77331269b525e3cad
Signed-off-by: David Brown <davidb@codeaurora.org>
Adding a config string allows a config item to be enabled either
through a defconfig or through menuconfig.
Change-Id: I0d1ca3f1fc4c9ce45c433292af8ffbe3482a450b
Signed-off-by: Karthikeyan Ramasubramanian <kramasub@codeaurora.org>
In some cases, free_irq_cpu_rmap() is called while holding a lock (eg
rtnl). This can lead to deadlocks, because it invokes
flush_scheduled_work() which ends up waiting for whole system workqueue
to flush, but some pending works might try to acquire the lock we are
already holding.
This commit uses reference counting to fix the problem, and removes
irq_run_affinity_notifiers() altogether.
[akpm@linux-foundation.org: eliminate free_cpu_rmap, rename cpu_rmap_reclaim() to cpu_rmap_release(), propagate kref_put() retval from cpu_rmap_put()]
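A sketch of the reference counting, using the names from the note above
(assuming struct cpu_rmap gains a kref member named refcount):

	static void cpu_rmap_release(struct kref *ref)
	{
		struct cpu_rmap *rmap = container_of(ref, struct cpu_rmap, refcount);

		kfree(rmap);
	}

	int cpu_rmap_put(struct cpu_rmap *rmap)
	{
		/* kref_put()'s return value is propagated to the caller */
		return kref_put(&rmap->refcount, cpu_rmap_release);
	}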
Signed-off-by: David Decotigny <decot@googlers.com>
Reviewed-by: Ben Hutchings <bhutchings@solarflare.com>
Acked-by: Eric Dumazet <edumazet@google.com>
Reviewed-by: Josh Triplett <josh@joshtriplett.org>
Cc: "David S. Miller" <davem@davemloft.net>
Cc: Or Gerlitz <ogerlitz@mellanox.com>
Acked-by: Amir Vadai <amirv@mellanox.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
The genalloc code uses the bitmap API from include/linux/bitmap.h and
lib/bitmap.c, which is based on long values. Both bitmap_set from
lib/bitmap.c and bitmap_set_ll, which is the lockless version from
genalloc.c, use BITMAP_LAST_WORD_MASK to set the first bits in a long in
the bitmap.
That one uses (1 << bits) - 1, 0b111, if you are setting the first three
bits. This means that the API counts from the least significant bits
(LSB from now on) to the MSB. The LSB in the first long is bit 0, then.
The same works for the lookup functions.
The genalloc code uses longs for the bitmap, as it should. In
include/linux/genalloc.h, struct gen_pool_chunk has unsigned long
bits[0] as its last member. When allocating the struct, genalloc should
reserve enough space for the bitmap. This should be a proper number of
longs that can fit the amount of bits in the bitmap.
However, genalloc allocates an integer number of bytes that fit the
amount of bits, but may not be an integer amount of longs. 9 bytes, for
example, could be allocated for 70 bits.
This is a problem in itself if the Least Significant Bit of a long is
in the byte with the largest address, which happens on Big Endian
machines. This means genalloc is not allocating the byte in which it
will try to set or check a bit.
This may end up in memory corruption, where genalloc will try to set
bits it has not allocated. In fact, genalloc may not set these bits
because it may find them already set, since they were never zeroed
(they were never allocated). And that's what causes a BUG when
gen_pool_destroy is called and checks for any set bits.
What really happens is that genalloc uses kmalloc_node with __GFP_ZERO
on gen_pool_add_virt. With SLAB and SLUB, this means the whole slab
will be cleared, not only the requested bytes. Since struct
gen_pool_chunk has a size that is a multiple of 8, and slab sizes are
multiples of 8, we get lucky and allocate and clear the right amount of
bytes.
However, this is not the case with SLOB, or with older code that did a
memset after allocating instead of using __GFP_ZERO.
So a simple module like this (running 3.6.0) will cause a crash when
rmmod'ed.
[root@phantom-lp2 foo]# cat foo.c
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/genalloc.h>
MODULE_LICENSE("GPL");
MODULE_VERSION("0.1");
static struct gen_pool *foo_pool;
static __init int foo_init(void)
{
	int ret;

	foo_pool = gen_pool_create(10, -1);
	if (!foo_pool)
		return -ENOMEM;
	ret = gen_pool_add(foo_pool, 0xa0000000, 32 << 10, -1);
	if (ret) {
		gen_pool_destroy(foo_pool);
		return ret;
	}
	return 0;
}

static __exit void foo_exit(void)
{
	gen_pool_destroy(foo_pool);
}

module_init(foo_init);
module_exit(foo_exit);
[root@phantom-lp2 foo]# zcat /proc/config.gz | grep SLOB
CONFIG_SLOB=y
[root@phantom-lp2 foo]# insmod ./foo.ko
[root@phantom-lp2 foo]# rmmod foo
------------[ cut here ]------------
kernel BUG at lib/genalloc.c:243!
cpu 0x4: Vector: 700 (Program Check) at [c0000000bb0e7960]
pc: c0000000003cb50c: .gen_pool_destroy+0xac/0x110
lr: c0000000003cb4fc: .gen_pool_destroy+0x9c/0x110
sp: c0000000bb0e7be0
msr: 8000000000029032
current = 0xc0000000bb0e0000
paca = 0xc000000006d30e00 softe: 0 irq_happened: 0x01
pid = 13044, comm = rmmod
kernel BUG at lib/genalloc.c:243!
[c0000000bb0e7ca0] d000000004b00020 .foo_exit+0x20/0x38 [foo]
[c0000000bb0e7d20] c0000000000dff98 .SyS_delete_module+0x1a8/0x290
[c0000000bb0e7e30] c0000000000097d4 syscall_exit+0x0/0x94
--- Exception: c00 (System Call) at 000000800753d1a0
SP (fffd0b0e640) is in userspace
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@linux.vnet.ibm.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Benjamin Gaignard <benjamin.gaignard@stericsson.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Git-commit: eedce141cd2dad8d0cefc5468ef41898949a7031
Git-repo: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git
Change-Id: Ic8a8a2d05ee16d59af61d0cc79c88e6e7ab442ae
Signed-off-by: Sunil Khatri <sunilkh@codeaurora.org>
In the existing code we calculate nbytes based on a byte boundary, but
genalloc uses a bitmap that maintains the memory allocation aligned to
long, so we end up with the wrong nbytes.
For example, nbytes comes to 9 bytes for 70 bits when byte-aligned, but
when long-aligned we have 3 long words, i.e. 12 bytes. This difference
may lead to choosing the wrong API for freeing the memory, i.e. between
kfree() and vfree().
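A sketch of the corrected calculation (BITS_TO_LONGS rounds 70 bits up
to 3 longs, i.e. 12 bytes with 4-byte longs, instead of 9 bytes):

	/* size the chunk so its bitmap is a whole number of longs */
	nbytes = sizeof(struct gen_pool_chunk) +
			BITS_TO_LONGS(nbits) * sizeof(long);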
The fix was inspired by upstream commit
eedce141cd2dad8d0cefc5468ef41898949a7031, bringing the same fix into
the gen_pool_destroy() path.
Change-Id: I942caf59e25515c780896b328b912604df9e10bf
Signed-off-by: Hareesh Gundu <hareeshg@codeaurora.org>
Signed-off-by: Sunil Khatri <sunilkh@codeaurora.org>
The CP heap has been replaced by CMA-based heaps. Remove it.
Change-Id: I6b5e81d701e730584a456c7d9c3ff71397a12c0f
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
msm: Remove references to SMI
SMI memory is no longer supported on current targets. Remove
all references to it.
Change-Id: Ifbdf6247cc4b13c1d0173f1c998e07c702c06408
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
gpu: ion: get rid of adjacent heap code
The targets that require adjacent heaps are no longer present. Get
rid of the complicated crufty code.
Change-Id: Ic46cd16d36b812b19a7dc88202264cf137aebdd1
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
gpu: ion: Remove mach/ion.h
This file is now mostly useless. Move the remaining definitions
elsewhere and clean up the cruft.
Change-Id: Icc19f9138a0d9d24a466511d69bc0eed45789fb9
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
gpu: ion: Remove allocate_contiguous_ebi call for carveouts
The allocate_contiguous_ebi APIs are being deprecated in favor
of the standard dma_alloc_coherent APIs. The carveout heap
does not currently support the dma_alloc_coherent APIs so just
give a note that clients should switch over to a heap type that
does support dma_alloc_coherent for non-fixed addresses. Remove
some crufty stuff related to allocate_contiguous_ebi memory types
as well.
Change-Id: I96fd43eb59d4cde4cdfd5c8d414885467ff423db
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
x9007: Remove references to SMI memory
Change-Id: I1b22e2d09c830be47e8100f1fbe9cf084fd0be31
gpu: ion: change pr_err to pr_debug for secure buffer failure
New architecture requirements for secure buffers mean that a
secure buffer failure may happen during normal playback. There's
no need to print an error message every time during normal use
cases so change the pr_err to pr_debug.
Change-Id: Ic05d0e95e2542dec9661e990cbfefd50274abc9e
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
gpu: ion: Pull secure buffers out of Ion
Securing individual buffers does not belong in the Ion framework.
Pull the code out of the framework and into a file which can be
managed separately. The file can be moved elsewhere at a later
date.
Change-Id: Ifaec740abcd78ee1bc3769a6c73014e4309c82be
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
gpu: ion: Skip wrong usage check for secure buffers
With the change to secure buffers at allocation time, it is
no longer possible to correctly validate if the memory is
re-secured with a different usage hint. Considering the usage
hint is being deprecated anyway, drop the redundant check. If
a client tries to re-secure with the wrong usage hint, failures
will occur elsewhere to indicate the problem.
Change-Id: If8487c431eb4555927cd06c6cc5edcb1a66002f1
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
gpu: ion: only secure buffers when supported
There are some targets where securing Ion buffers is not supported
through the TZ API used in the MSM secure buffer code. Currently, we
simply try the allocation and if it fails we let the clients secure it
themselves through some other means. However, due to a recent change to
the TZ error logging in the kernel, the current "ask for forgiveness"
approach results in logspam. Fix this by moving to an "ask for
permission" approach: if the TZ API to secure the buffers is not
available, don't attempt to use it.
Change-Id: Ie003d5139a0622ce9c18719ca2697525e23e7733
Signed-off-by: Mitchel Humpherys <mitchelh@codeaurora.org>
ion: fix print statements
Use %zx, %zu for printing size_t and %p for printing
pointers (rather than casting them as integers and using %x).
Change-Id: I86bc2b10283b640f51a861adde6b8181bdbbcb29
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
msm: Remove lib/memory_alloc.c and related files
The memory pools are gone for good. Remove memory_alloc.c and the
associated header file from clients.
Change-Id: I8d303c72fa03cdca7eee34b39d57d5cbb5df920d
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
msm: Remove memory reservation code
The memory reservation API used for allocate_contiguous_ebi is now
deprecated in favor of using standard kernel APIs such as
dma_alloc_coherent and CMA for reserving/allocating memory. Remove
lots of code.
Change-Id: I5a7c22cce6b4b772dc67d18029e40b4509d3ba40
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
msm: Remove ugly header include fixme
Long ago, some targets needed ugly hacks to work around buggy
hardware. These targets have fortunately been deprecated.
Remove the ugly header include remnants.
Change-Id: Ia7eef2f8f625775e40c1a73f5c2095560618cada
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
msm: Remove store_ttbr0
This was used for debugging a while back but isn't even called
anymore. Remove it.
Change-Id: Ia5e9c609508a80ae471b9ff448b92fe193e2d2b1
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
msm: clean up memory.c
This file contained support and workarounds for platforms that are
no longer supported on 3.10 -- remove this code.
Change-Id: I841a04c34c5fbe36a40b5ee49e3ac8608c963808
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
msm: clean up memory.h
Remove various defines and comments which were only
relevant to platforms or features not supported any more.
Change-Id: I3eade953577079878375c87e35c842090c35f55d
Signed-off-by: Larry Bassel <lbassel@codeaurora.org>
msm: remove allocate_contiguous_ebi APIs
The allocate_contiguous_ebi APIs are now deprecated. Remove them.
Change-Id: Icb83230e5b4b2eca135e8a27f6d5e73a0b9b3bb9
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
msm: Remove DONT_MAP_HOLE_AFTER_MEMBANK0
The DONT_MAP_HOLE_AFTER_MEMBANK0 feature is now deprecated. Remove
the code associated with it.
Change-Id: I729b9574a7655e97c8e7abc68797b874e90dcac2
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
ASoC: msm: qdsp6v2: Migrate away from allocate_contiguous_ebi APIs
The allocate_contiguous_ebi APIs are being deprecated in favor of
the standard dma_alloc_coherent APIs. Migrate accordingly.
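A sketch of the standard replacement pattern (dev and size are
illustrative; error handling is trimmed):

	dma_addr_t phys;
	void *vaddr = dma_alloc_coherent(dev, size, &phys, GFP_KERNEL);

	if (!vaddr)
		return -ENOMEM;
	/* ... use vaddr (CPU) and phys (device) ... */
	dma_free_coherent(dev, size, vaddr, phys);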
Change-Id: I01dece36f7aaae192afa689daf0b51de4c1ec592
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
msm: kgsl: Remove reference to memory_alloc.h
Change-Id: I11a86f6f3254c8f150b7287d729540ef5db71ea1
msm: rtb: Convert away from allocate_contiguous_ebi APIs
The allocate_contiguous_ebi APIs are now deprecated. Move
to using dma_alloc_coherent for the RTB buffer. Because we
aren't using allocate_contiguous_ebi for allocation, change
the devicetree bindings as well to support using CMA or just
a regular size from the generic region.
Change-Id: I8ab0b7336924a78b40c68af3f215d7bfd2fba33c
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
msm: pm-boot: Remove boot remapping support
All devices that used the boot remapping code are no longer supported.
Remove the code that sets this up.
Change-Id: I85eb4e9f01664c855b0f96c34daca4c2a16ce896
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
msm: pm-boot: clean up non TZ support code
Modern ARM processors supporting TZ environment by
default. Currently, cores warmboot starts from TZ when
there is a TZ support. This patch removes the non TZ
supported code.
Change-Id: Ifa5df40bd649e4860a2660132f3fc30fa7937358
Signed-off-by: Murali Nalajala <mnalajal@codeaurora.org>
msm: pm-boot: flush msm_pm_boot_vector to main memory
msm_pm_boot_vector array stores cores warmboot entry before
cores enter into LPM mode. These warm boot entries must be
visible to core when it comes out of LPM mode. To ensure
this variable to appear in main memory use flush APIs to
flush out of caches. Use a different API to do this on
latest ARM based targets.
Change-Id: I870b68e4f6f6b116e93deffe59edff9094db0fa8
Signed-off-by: Murali Nalajala <mnalajal@codeaurora.org>
msm: buspm: Remove allocate_contiguous_ebi calls
allocate_contiguous_ebi is being deprecated in favor of the standard
dma_alloc_coherent APIs. Move accordingly.
Change-Id: I3746e5ac047b1a6ee570aaf96ffbe305e3b6a49f
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
msm: kgsl: Add CMA memory feature to support NOMMU for GPU
Reserves CMA memory for kgsl driver early during bootup and then
uses dma_alloc_coherent() to allocate physically contiguous memory
instead of using the MMU
Change-Id: Ica9b244fe9b9d8a902d670293a0bec2edf81bd5d
Signed-off-by: Shrenuj Bansal <shrenujb@codeaurora.org>
media: Remove videobuf2-msm-mem.c
This file is no longer needed by any frameworks. Remove it.
Change-Id: I66f01ab9ba273ba15b1966ad040d1d3d172d33bf
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
I was reviewing code which I suspected might allocate a zero-size SG
table. That will cause memory corruption. Also, we can't return before
doing the memset or we could end up using uninitialized memory in the
cleanup path.
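The shape of the fix is roughly the following (a sketch of
__sg_alloc_table-style code, not the exact diff):

    memset(table, 0, sizeof(*table));   /* zero first so cleanup is safe */

    if (nents == 0)
            return -EINVAL;             /* reject zero-size SG tables */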
Change-Id: Icee6be8ea22644d7f16264d9d2a0887c7145996b
CRs-Fixed: 611562
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Maxim Levitsky <maximlevitsky@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Increase spin dump timeout so that it is back in sync with the
upstream value.
Change-Id: I4b8896b69dadd66196090d86e40da89766b048e6
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
Calls to sysrq_sched_debug_show() can yield rather verbose output
which contributes to log spew and, under heavy load, may increase
the chances of a watchdog bark.
Make printing of this data optional with the introduction of a
new Kconfig, CONFIG_SYSRQ_SCHED_DEBUG.
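The implied guard is a simple conditional compile around the call site
(a sketch; the surrounding context is illustrative):

    #ifdef CONFIG_SYSRQ_SCHED_DEBUG
            sysrq_sched_debug_show();   /* verbose scheduler state dump */
    #endif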
Change-Id: I5f54d901d0dea403109f7ac33b8881d967a899ed
Signed-off-by: Matt Wagantall <mattw@codeaurora.org>
The iteration logic of idr_get_next() is borrowed mostly verbatim from
idr_for_each(). It walks down the tree looking for the slot matching
the current ID. If the matching slot is not found, the ID is
incremented by the distance of a single slot at the given level and the
walk repeats.
The implementation assumes that during the whole iteration id is aligned
to the layer boundaries of the level closest to the leaf, which is true
for all iterations starting from zero or an existing element and thus is
fine for idr_for_each().
However, idr_get_next() may be given any point and if the starting id
hits in the middle of a non-existent layer, increment to the next layer
will end up skipping the same offset into it. For example, an IDR with
IDs filled between [64, 127] would look like the following.
  [  0  64 ... ]
 /----/   |
 |        |
 NULL   [ 64 ... 127 ]
If idr_get_next() is called with 63 as the starting point, it will try
to follow down the pointer from 0. As it is NULL, it will then try to
proceed to the next slot in the same level by adding the slot distance
at that level which is 64 - making the next try 127. It goes around the
loop and finds and returns 127 skipping [64, 126].
Note that this bug also triggers in idr_for_each_entry() loop which
deletes during iteration as deletions can make layers go away leaving
the iteration with unaligned ID into missing layers.
Fix it by ensuring that proceeding to the next slot doesn't carry over
the unaligned offset, i.e. use round_up(id + 1, slot_distance) instead
of id += slot_distance.
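A small userspace demonstration of the stepping arithmetic (a standalone
sketch, not the idr code itself):

    #include <stdio.h>

    #define ROUND_UP(x, a)  ((((x) + (a) - 1) / (a)) * (a))

    int main(void)
    {
            int slot_distance = 64; /* slot distance at this level */
            int id = 63;            /* the unaligned starting point */

            /* buggy step: 63 + 64 = 127, silently skipping [64, 126] */
            printf("id += slot_distance  -> %d\n", id + slot_distance);

            /* fixed step: round_up(64, 64) = 64, start of next layer */
            printf("round_up(id + 1, sd) -> %d\n",
                   ROUND_UP(id + 1, slot_distance));
            return 0;
    }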
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: David Teigland <teigland@redhat.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Change-Id: Ifb6d64747e7f08e2f5d7609bd66a48405f0c9b95
Signed-off-by: Prakash Kamliya <pkamliya@codeaurora.org>
There is a race between klist_remove and klist_release. klist_remove
uses a local variable, waiter, saved on its stack. When klist_release
calls wake_up_process(waiter->process) to wake the waiter, the waiter
might run immediately and reuse that stack. klist_release then calls
list_del(&waiter->list), modifying what is now stale stack memory and
corrupting the prior waiter thread's stack.
The patch fixes this against kernel 3.9.
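Per the referenced upstream commit, the fix is to unlink the waiter
before waking it, so its stack frame is never touched after the waiter
may have resumed (simplified sketch of klist_release):

    list_del(&waiter->list);    /* unlink while the stack is still valid */
    waiter->woken = 1;
    mb();
    wake_up_process(waiter->process);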
Change-Id: I3c418c9ca5411a8ba934322172e33b93d6a480bb
Signed-off-by: wang, biao <biao.wang@intel.com>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Git-commit: ac5a2962b02f57dea76d314ef2521a2170b28ab6
Git-repo: git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
Signed-off-by: Matt Wagantall <mattw@codeaurora.org>
On certain targets the worst case console buffer draining time can
exceed the aggressive spin dump timeout value. Relax the timeout value
to cater to these scenarios.
Change-Id: I7312ffe06bafd2dc7adf0f4e8b73a53fc5839235
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
The devm_request_and_ioremap() function is very useful and helps avoid a
whole lot of boilerplate. However, one issue that keeps popping up is
its lack of a specific error code to determine which of the steps that
it performs failed. Furthermore, while the function gives an example and
suggests what error code to return on failure, a wide variety of error
codes are used throughout the tree.
In an attempt to fix these problems, this patch adds a new function that
drivers can transition to. The devm_ioremap_resource() returns a pointer
to the remapped I/O memory on success or an ERR_PTR() encoded error code
on failure. Callers can check for failure using IS_ERR() and determine
its cause by extracting the error code using PTR_ERR().
devm_request_and_ioremap() is implemented as a wrapper around the new
API and returns NULL on failure as before. This ensures that backwards
compatibility is maintained until all users have been converted to the
new API, at which point the old devm_request_and_ioremap() function
should be removed.
A semantic patch is included which can be used to convert from the old
devm_request_and_ioremap() API to the new devm_ioremap_resource() API.
Some non-trivial cases may require manual intervention, though.
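Typical usage in a driver probe path, as the commit describes (a sketch;
the resource index is illustrative):

    struct resource *res;
    void __iomem *base;

    res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
    base = devm_ioremap_resource(&pdev->dev, res);
    if (IS_ERR(base))
            return PTR_ERR(base);   /* a precise error code, not just NULL */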
Change-Id: I458afe18768979122c12e95bffe979a31f02f607
Signed-off-by: Thierry Reding <thierry.reding@avionic-design.de>
Cc: Arnd Bergmann <arnd@arndb.de>
Acked-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Git-commit: 75096579c3ac39ddc2f8b0d9a8924eba31f4d920
Git-repo: https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git
Signed-off-by: Dolev Raviv <draviv@codeaurora.org>
Data corruptions in the kernel often end up in system crashes that
are easier to debug closer to the time of detection. Specifically,
if we do not panic immediately after lock or list corruptions have been
detected, the problem context is lost in the ensuing system mayhem.
Add support for crashing the system immediately after such corruptions
are detected. A Kconfig option controls enabling and disabling of the
feature.
Change-Id: I9b2eb62da506a13007acff63e85e9515145909ff
Signed-off-by: Syed Rameez Mustafa <rameezmustafa@codeaurora.org>
The current memory allocation code does not support physical
addresses above 32-bits, meaning that LPAE is not supported.
Change the types of the code internally to support 64-bit
physical addresses. Full 64-bit virtual addresses may not be
fully supported so this code will still need an update when
that happens.
Change-Id: I0c4a7b76784981ffcd7f4776d51e05f3592bb7b8
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
genalloc may be used to allocate physical addresses in addition
to virtual addresses. On LPAE based systems, physical addresses
are 64-bit which does not fit in an unsigned long. Change the
type used for allocation internally to be 64 bit to ensure that
genalloc can properly allocate physical addresses > 4GB.
Ideally this will just be a temporary workaround until either
a proper fix comes elsewhere or an alternative API is used to
manage physical memory.
Change-Id: Ib10b887730e0c6916de5d1e6f77e771c6cde14bb
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>
Add the %pa format specifier for printing a phys_addr_t
type and its derivative types (such as resource_size_t),
since the physical address size on some platforms can vary
based on build options, regardless of the native integer
type.
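Usage then looks like the following (values illustrative; note that %pa
takes a pointer to the phys_addr_t, matching the other %p extensions):

    phys_addr_t base = 0x80000000;
    resource_size_t size = 0x100000;

    pr_info("region at %pa, size %pa\n", &base, &size);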
Change-Id: I2ba0003d689a9a2bd13f1a1e2d897b6eacc5d224
Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org>
Change-Id: I9d411a9c00b37d0907066dbfa6db3a78c108646f
Signed-off-by: Duy Truong <dtruong@codeaurora.org>
The new API is added to verify that the maximum length of a QMI message
embedded inside the message descriptor matches the expected maximum
length of the QMI message concerned. The expected maximum length of
the QMI message is calculated from the message descriptor, which
describes all the elements in the structure.
Change-Id: I09602df410d9891d60f1502245ad055653f8f0f7
Signed-off-by: Karthikeyan Ramasubramanian <kramasub@codeaurora.org>
If QMI clients pass an invalid maximum message length parameter, the
QMI message encode operation may perform an out-of-bounds access on
the encode buffer.
Add a check to the QMI message encode operation to avoid the
out-of-bounds buffer access.
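The added check amounts to bounds-checking the running encode offset
against the caller-supplied buffer length, along these lines (a
hypothetical sketch; the variable names are not from the actual code):

    /* before encoding the next element into the output buffer */
    if (encoded_bytes + elem_len > out_buf_len)
            return -ETOOSMALL;  /* would overrun the encode buffer */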
CRs-Fixed: 437329
Change-Id: I9d16078fade8160f327a8a77aea6c2a8e66b3127
Signed-off-by: Karthikeyan Ramasubramanian <kramasub@codeaurora.org>
The logic in do_raw_spin_lock attempts to acquire a spinlock
by invoking arch_spin_trylock in a loop with a delay between
each attempt. Now consider the following situation in a 2
CPU system:
1. Cpu 0 continually acquires and releases a spinlock in a
tight loop; it stays in this loop until some condition X
is satisfied. X can only be satisfied by another CPU.
2. Cpu 1 tries to acquire the same spinlock, in an attempt
to satisfy the aforementioned condition X. However, it
never sees the unlocked value of the lock because the
debug spinlock code uses trylock instead of just lock;
it checks at all the wrong moments - whenever Cpu 0 has
locked the lock.
Now in the absence of debug spinlocks, the architecture-specific
spinlock code can correctly allow Cpu 1 to wait in a "queue"
(e.g., ticket spinlocks), ensuring that it acquires the lock at
some point. However, with the debug spinlock code, livelock
can easily occur due to the use of trylock, which obviously
cannot put the CPU in that "queue".
Address this by actually attempting arch_spin_lock once it is
suspected that there is a spinlock lockup. If we're in a
situation that is described above, the arch_spin_lock should
succeed; otherwise other timeout mechanisms (e.g., watchdog)
should alert the system of a lockup.
Additionally, the delay between each trylock attempt adds up
to a total delay of 1 second. On some embedded architectures,
this is an unacceptably large value. Reduce this to a ~62 ms
delay.
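Put together, the resulting debug slow path is roughly the following
(simplified from lib/spinlock_debug.c; the loop bound is illustrative):

    static void __spin_lock_debug(raw_spinlock_t *lock)
    {
            u64 i, loops = loops_per_jiffy * HZ / 16;  /* ~62 ms, not 1 s */

            for (i = 0; i < loops; i++) {
                    if (arch_spin_trylock(&lock->raw_lock))
                            return;
                    __delay(1);
            }

            /* lockup suspected: report it, then queue for the lock */
            spin_dump(lock, "lockup suspected");
            arch_spin_lock(&lock->raw_lock);
    }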
CRs-Fixed: 393351
Change-Id: I56f0d8f5e483cd7e8167fc7a363b9de7777a0830
Signed-off-by: Vikram Mulukutla <markivx@codeaurora.org>
Add logging macros to the QMI Encode/Decode Library to enable debugging.
Add a kernel config option to enable/disable the logging.
Change-Id: I9d8b44b2e75f281067161618f2a02a660c4d6812
Signed-off-by: Karthikeyan Ramasubramanian <kramasub@codeaurora.org>
Introduce an Encode/Decode Library to perform QMI message marshaling.
The QMI Encode/Decode Library encodes the kernel C data structures into
QMI wire format and decodes the messages in QMI wire format into kernel
C structures.
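Assuming qmi_kernel_encode()-style entry points (the names here are an
assumption based on later msm kernels, not confirmed by this commit),
usage would look roughly like:

    /* encode a kernel C struct into QMI wire format; the descriptor
     * lists the struct's elements and maximum encoded length */
    rc = qmi_kernel_encode(&req_desc, out_buf, sizeof(out_buf), &req);
    if (rc < 0)
            return rc;  /* on success, rc is the encoded length */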
Change-Id: Ib443e697dafedeac8a790de9a3a8ed4a8444082f
Signed-off-by: Karthikeyan Ramasubramanian <kramasub@codeaurora.org>
Change the chunk allocation from kmalloc to vmalloc for
allocations greater than a page. This allows large
chunks to be allocated from physically non-contiguous
memory and increases the chance of the allocation
succeeding.
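The size-based switch follows a common kernel pattern (a sketch, not the
exact genalloc diff; names are illustrative):

    if (alloc_size > PAGE_SIZE)
            chunk = vzalloc(alloc_size);    /* non-contiguous pages are fine */
    else
            chunk = kzalloc(alloc_size, GFP_KERNEL);
    if (!chunk)
            return NULL;

The matching free path then has to pick vfree() versus kfree(), e.g. by
checking the size again or by using is_vmalloc_addr().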
CRs-fixed: 387655
Change-Id: I75cd0d1634c6f8ff4d91e615122fdcfada00ec69
Signed-off-by: Shubhraprakash Das <sadas@codeaurora.org>
bitmap_find_next_zero_area_off requires everything to be given in terms
of bits. genalloc no longer keeps track of everything in terms of bits,
so the start address must be shifted before being passed to
bitmap_find_next_zero_area_off. Without this shift, an unaligned start
address is not taken into account.
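The conversion is a shift by the pool's minimum allocation order before
the bitmap search (a sketch; names follow genalloc but are simplified):

    /* genalloc tracks addresses in min_alloc_order granules; the
     * bitmap helper wants everything in bits */
    start_bit = (start_addr - chunk_start) >> pool->min_alloc_order;
    bit = bitmap_find_next_zero_area_off(map, size_bits, start_bit,
                                         nbits, align_mask, align_off);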
Change-Id: I49e3dba51c48e2f97b03b21304e1258e83881c6b
Signed-off-by: Laura Abbott <lauraa@codeaurora.org>