|
commit dc8a0843a435b2c0891e7eaea64faaf1ebec9b11 upstream
deflate_mutex protects the globals lzo_mem and lzo_compress_buf. However,
jffs2_lzo_compress() unlocks deflate_mutex _before_ it has copied out the
compressed data from lzo_compress_buf. Correct this by moving the mutex
unlock after the copy.
In addition, document what deflate_mutex actually protects.
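For illustration, a minimal sketch of the corrected ordering (buffer and helper names follow the commit text; exact signatures in the JFFS2 LZO compressor may differ):
mutex_lock(&deflate_mutex);	/* protects lzo_mem and lzo_compress_buf */
ret = lzo1x_1_compress(data_in, *sourcelen, lzo_compress_buf,
		       &compress_size, lzo_mem);
if (ret == LZO_E_OK)
	memcpy(cpage_out, lzo_compress_buf, compress_size);	/* copy out while still locked */
mutex_unlock(&deflate_mutex);	/* only now is it safe to drop the lock */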
Signed-off-by: Geert Uytterhoeven <Geert.Uytterhoeven@sonycom.com>
Acked-by: Richard Purdie <rpurdie@openedhand.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit a53a6c85756339f82ff19e001e90cfba2d6299a8 upstream
Adding a spare to a raid10 doesn't cause recovery to start.
This is due to a silly typo in
commit 6c2fce2ef6b4821c21b5c42c7207cb9cf8c87eda
and so is a bug in 2.6.27 and .28-rc.
Thanks to Thomas Backlund for bisecting to find this.
Cc: Thomas Backlund <tmb@mandriva.org>
Cc: George Spelvin <linux@horizon.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit f1cd14ae52985634d0389e934eba25b5ecf24565 upstream
Date: Thu, 6 Nov 2008 19:41:24 +1100
Subject: md: linear: Fix a division by zero bug for very small arrays.
We currently oops with a divide error on starting a linear software
raid array consisting of at least two very small (< 500K) devices.
The bug is caused by the calculation of the hash table size which
tries to compute sector_div(sz, base) with "base" being zero due to
the small size of the component devices of the array.
Fix this by requiring the hash spacing to be at least one, which
implies that "base" is also non-zero.
This bug has existed since about 2.6.14.
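A rough sketch of the guard; the variable names are illustrative, not the driver's actual fields:
/* never let the divisor used for the hash table size reach zero */
if (hash_spacing < 1)
	hash_spacing = 1;
sector_div(sz, hash_spacing);	/* "base" is now guaranteed non-zero */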
Signed-off-by: Andre Noll <maan@systemlinux.org>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 77ca7286d10b798e4907af941f29672bf484db77 upstream
cciss: new hardware support
Add support for 2 new SAS/SATA controllers.
Signed-off-by: Mike Miller <mike.miller@hp.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 404443081ce5e6f68b5f7eda16c959835ff200c0 upstream
Regression introduced by commit 6ae5ce8e8d4de666f31286808d2285aa6a50fa40
("cciss: remove redundant code").
This patch fixes a broken symlink in sysfs that was introduced by the
above commit. We broke it in 2.6.27-rc on or about 20080804. Some
installers are broken if this symlink does not exist and they may not
detect the logical drives configured on the controller. It does not
require being backported into 2.6.26.x or earlier kernels.
Signed-off-by: Mike Miller <mike.miller@hp.com>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 22bece00dc1f28dd3374c55e464c9f02eb642876 upstream
This regression was introduced by commit
6ae5ce8e8d4de666f31286808d2285aa6a50fa40 ("cciss: remove redundant code").
This patch fixes a regression where the controller firmware version is not
displayed in procfs. The previous patch would be called anytime something
changed. This will get called only once for each controller.
Signed-off-by: Mike Miller <mike.miller@hp.com>
Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Cc: Jens Axboe <jens.axboe@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 69d177c2fc702d402b17fdca2190d5a7e3ca55c5 upstream
When working with hugepages, hugetlbfs assumes that those hugepages are
smaller than MAX_ORDER. Specifically it assumes that the mem_map is
contiguous and uses that to optimise access to the elements of the mem_map
that represent the hugepage. Gigantic pages (such as 16GB pages on
powerpc) by definition are of greater order than MAX_ORDER (larger than
MAX_ORDER_NR_PAGES in size). This means that we can no longer make use of
the buddy allocator guarantees for the contiguity of the mem_map, which
ensure that the mem_map is at least contiguous for maximally aligned
areas of MAX_ORDER_NR_PAGES pages.
This patch adds new mem_map accessors and iterator helpers which handle
any discontiguity at MAX_ORDER_NR_PAGES boundaries. It then uses these to
implement gigantic page versions of copy_huge_page and clear_huge_page,
and to allow follow_hugetlb_page to handle gigantic pages.
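A sketch of the kind of accessor this adds; the helper name here is an assumption, not necessarily the one introduced in mm/:
/* step to the next struct page, re-resolving across a possible
 * mem_map discontiguity at each MAX_ORDER_NR_PAGES boundary */
static inline struct page *gigantic_mem_map_next(struct page *iter,
						 struct page *base, int offset)
{
	if ((offset & (MAX_ORDER_NR_PAGES - 1)) == 0) {
		unsigned long pfn = page_to_pfn(base) + offset;

		if (!pfn_valid(pfn))
			return NULL;
		return pfn_to_page(pfn);
	}
	return iter + 1;
}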
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Cc: Jon Tollefson <kniht@linux.vnet.ibm.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
commit 18229df5b613ed0732a766fc37850de2e7988e43 upstream
As we can determine exactly when a gigantic page is in use we can optimise
the common regular page cases by pulling out gigantic page initialisation
into its own function. As gigantic pages are never released to buddy we
do not need a destructor. This effectively reverts the previous change to
the main buddy allocator. It also adds a paranoid check to ensure we
never release gigantic pages from hugetlbfs to the main buddy.
Signed-off-by: Andy Whitcroft <apw@shadowen.org>
Cc: Jon Tollefson <kniht@linux.vnet.ibm.com>
Cc: Mel Gorman <mel@csn.ul.ie>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Cc: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 24eb089950ce44603b30a3145a2c8520e2b55bb1 upstream
This fixes an oops when reading /proc/sched_debug.
A cgroup won't be removed completely until finishing cgroup_diput(), so we
shouldn't invalidate cgrp->dentry in cgroup_rmdir(). Otherwise, when a
group is being removed while cgroup_path() gets called, we may trigger
a NULL dereference BUG.
The bug can be reproduced:
# cat test.sh
#!/bin/sh
mount -t cgroup -o cpu xxx /mnt
for (( ; ; ))
{
mkdir /mnt/sub
rmdir /mnt/sub
}
# ./test.sh &
# cat /proc/sched_debug
BUG: unable to handle kernel NULL pointer dereference at 00000038
IP: [<c045a47f>] cgroup_path+0x39/0x90
..
Call Trace:
[<c0420344>] ? print_cfs_rq+0x6e/0x75d
[<c0421160>] ? sched_debug_show+0x72d/0xc1e
..
Signed-off-by: Li Zefan <lizf@cn.fujitsu.com>
Acked-by: Paul Menage <menage@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
Cc: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit a8b71a2810386a5ac8f43d2095fe3355f0d8db37 upstream.
DMI tables need a blank NULL tail.
This fixes the crash on Ingo's test box.
Signed-off-by: Yinghai Lu <yhlu.kernel@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 2216d199b1430d1c0affb1498a9ebdbd9c0de439 upstream
The bad_bios_dmi_table() quirk never triggered because we do DMI setup
too late. Move it a bit earlier.
Also change the CONFIG_X86_RESERVE_LOW_64K quirk to operate on the e820
table directly instead of messing with early reservations - this handles
overlaps (which do occur in this low range of RAM) more gracefully.
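For illustration, a sketch of what operating on the e820 table directly from a DMI callback can look like; the function name and exact arguments are assumptions, not a copy of the patch:
static int __init dmi_low_memory_corruption(const struct dmi_system_id *d)
{
	printk(KERN_NOTICE "%s detected: BIOS may corrupt low RAM, "
	       "working around it.\n", d->ident);
	/* mark the first 64K reserved; e820 handles overlapping ranges */
	e820_update_range(0, 64 * 1024, E820_RAM, E820_RESERVED);
	return 0;
}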
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit fc38151947477596aa27df6c4306ad6008dc6711 upstream.
This bugzilla:
http://bugzilla.kernel.org/show_bug.cgi?id=11237
Documents a wide range of systems where the BIOS utilizes the first
64K of physical memory during suspend/resume and other hardware events.
Currently we reserve this memory on all AMI and Phoenix BIOS systems.
Life is too short to hunt subtle memory corruption problems like this,
so we try to be robust by default.
Still, allow this to be overridden: users who want that first 64K of
memory to be available to the kernel can disable the quirk via
CONFIG_X86_RESERVE_LOW_64K=n.
Also, allow the early reservation to overlap with other
early reservations.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 1e22436eba84edfec9c25e5a25d09062c4f91ca9 upstream
There are multiple reports of suspend/resume-related low memory
corruption in this bugzilla:
http://bugzilla.kernel.org/show_bug.cgi?id=11237
The common pattern is that the corruption is caused by the BIOS and
affects some portion of the first 64K of physical RAM.
So add a DMI quirk to reserve that range.
This will waste 64K RAM on 'good' systems too, but without knowing
the exact nature of this BIOS memory corruption this is the safest
approach.
This may well solve a wide range of suspend/resume breakages
under Linux.
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 5649b7c30316a51792808422ac03ee825d26aa5e upstream
Alan Jenkins and Andy Wettstein reported a suspend/resume memory
corruption bug and extensively documented it here:
http://bugzilla.kernel.org/show_bug.cgi?id=11237
The bug is that the BIOS overwrites 1K of memory at 0xc000 physical,
without registering it in e820 as reserved or giving the kernel any
idea about this.
Detect AMI BIOSen and reserve that 1K.
We work around this bug with a very broad brush (reserving that 1K on all
AMI BIOS systems), as the bug was extremely hard to find and needed several
weeks of debugging and patching.
The bug was found via the CONFIG_X86_CHECK_BIOS_CORRUPTION=y debug feature;
if similar bugs are suspected, this feature can be enabled on other
systems as well to scan low memory for corruption.
Reported-by: Alan Jenkins <alan-jenkins@tuffmail.co.uk>
Reported-by: Andy Wettstein <ajw1980@gmail.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit c87591b719737b4e91eb1a9fa8fd55a4ff1886d6 upstream
In ext3_sync_fs, we only wait for a commit to finish if we started it, but
there may be one already in progress which will not be synced.
In the case of a data=ordered umount with pending long symlinks which are
delayed due to a long list of other I/O on the backing block device, this
causes the buffer associated with the long symlinks to not be moved to the
inode dirty list in the second phase of fsync_super. Then, before they
can be dirtied again, kjournald sees the UMOUNT flag and exits, and the
dirty pages are never written to the backing block device, causing long
symlink corruption and exposing new or previously freed block data to
userspace.
This can be reproduced with a script created
by Eric Sandeen <sandeen@redhat.com>:
#!/bin/bash
umount /mnt/test2
mount /dev/sdb4 /mnt/test2
rm -f /mnt/test2/*
dd if=/dev/zero of=/mnt/test2/bigfile bs=1M count=512
touch /mnt/test2/thisisveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryverylongfilename
ln -s /mnt/test2/thisisveryveryveryveryveryveryveryveryveryveryveryveryveryveryveryverylongfilename /mnt/test2/link
umount /mnt/test2
mount /dev/sdb4 /mnt/test2
ls /mnt/test2/
umount /mnt/test2
To ensure all commits are synced, we flush all journal commits now when
sync_fs'ing ext3.
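One plausible shape of that change, sketched with the JBD helpers; the actual fs/ext3 code may differ in detail:
static int ext3_sync_fs(struct super_block *sb, int wait)
{
	tid_t target;

	/* force (or find) the running commit, then wait for it if asked */
	if (journal_start_commit(EXT3_SB(sb)->s_journal, &target)) {
		if (wait)
			log_wait_commit(EXT3_SB(sb)->s_journal, target);
	}
	return 0;
}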
Signed-off-by: Arthur Jones <ajones@riverbed.com>
Cc: Eric Sandeen <sandeen@redhat.com>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: <linux-ext4@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit f8d570a4745835f2238a33b537218a1bb03fc671 and
3b53fbf4314594fa04544b02b2fc6e607912da18 upstream (because once wasn't
good enough...)
__scm_destroy() walks the list of file descriptors in the scm_fp_list
pointed to by the scm_cookie argument.
Those, in turn, can close sockets and invoke __scm_destroy() again.
There is nothing which limits how deeply this can occur.
The idea for how to fix this is from Linus. Basically, we do all of
the fput()s at the top level by collecting all of the scm_fp_list
objects hit by an fput(). Inside of the initial __scm_destroy() we
keep running the list until it is empty.
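A simplified sketch of that scheme; it assumes the bookkeeping the patch adds (a list_head inside struct scm_fp_list and an scm_work_list pointer in task_struct):
void __scm_destroy(struct scm_cookie *scm)
{
	struct scm_fp_list *fpl = scm->fp;
	int i;

	if (!fpl)
		return;
	scm->fp = NULL;

	if (current->scm_work_list) {
		/* nested call: defer the fput()s to the outermost caller */
		list_add_tail(&fpl->list, current->scm_work_list);
		return;
	}

	/* outermost call: drain until fput() stops queueing new work */
	{
		LIST_HEAD(work_list);

		current->scm_work_list = &work_list;
		list_add(&fpl->list, &work_list);
		while (!list_empty(&work_list)) {
			fpl = list_first_entry(&work_list, struct scm_fp_list, list);
			list_del(&fpl->list);
			for (i = fpl->count - 1; i >= 0; i--)
				fput(fpl->fp[i]);	/* may re-enter and queue more */
			kfree(fpl);
		}
		current->scm_work_list = NULL;
	}
}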
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 3318a386e4ca68c76e0294363d29bdc46fcad670 upstream
Linux doesn't honor setuid on scripts. However, it mistakenly
behaves differently for file capabilities.
This patch fixes that behavior by making sure that get_file_caps()
begins with empty bprm->caps_*. That way when a script is loaded,
its bprm->caps_* may be filled when binfmt_misc calls prepare_binprm(),
but they will be cleared again when binfmt_elf calls prepare_binprm()
next to read the interpreter's file capabilities.
Signed-off-by: Serge Hallyn <serue@us.ibm.com>
Acked-by: David Howells <dhowells@redhat.com>
Acked-by: Andrew G. Morgan <morgan@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit ce39a800ea87c655de49af021c8b20ee323cb40d upstream.
A panic was discovered with bonding when using mode 5 or 6 and trying to
remove the slaves from the bond after the interface was taken down.
When calling 'ifconfig bond0 down' the following happens:
bond_close()
bond_alb_deinitialize()
tlb_deinitialize()
kfree(bond_info->tx_hashtbl)
bond_info->tx_hashtbl = NULL
Unfortunately if there are still slaves in the bond, when removing the
module the following happens:
bonding_exit()
bond_free_all()
bond_release_all()
bond_alb_deinit_slave()
tlb_clear_slave()
tx_hash_table = BOND_ALB_INFO(bond).tx_hashtbl
u32 next_index = tx_hash_table[index].next
As you might guess we panic when trying to access a few entries into the
table that no longer exists.
I experimented with several options (like moving the calls to
tlb_deinitialize somewhere else), but it really makes the most sense to
be part of the bond_close routine. It also didn't seem logical to move
tlb_clear_slave around too much, so the simplest option seems to be adding a
check in tlb_clear_slave to make sure we haven't already wiped the
tx_hashtbl away before searching for all the non-existent hash-table
entries that used to point to the slave as the output interface.
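In sketch form, the check amounts to bailing out of tlb_clear_slave() early (BOND_ALB_INFO and tx_hashtbl as quoted above; the element type is assumed):
struct tlb_client_info *tx_hash_table = BOND_ALB_INFO(bond).tx_hashtbl;

if (!tx_hash_table)
	return;	/* already freed by tlb_deinitialize() during bond_close() */
/* ...otherwise walk and clear the entries pointing at this slave... */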
Signed-off-by: Andy Gospodarek <andy@greyhouse.net>
Signed-off-by: Jay Vosburgh <fubar@us.ibm.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
While testing more corrupted images with hfsplus, I came across
one which triggered the following bug:
[15840.675016] BUG: unable to handle kernel paging request at fffffffb
[15840.675016] IP: [<c0116a4f>] kmap+0x15/0x56
[15840.675016] *pde = 00008067 *pte = 00000000
[15840.675016] Oops: 0000 [#1] PREEMPT DEBUG_PAGEALLOC
[15840.675016] Modules linked in:
[15840.675016]
[15840.675016] Pid: 11575, comm: ln Not tainted (2.6.27-rc4-00123-gd3ee1b4-dirty #29)
[15840.675016] EIP: 0060:[<c0116a4f>] EFLAGS: 00010202 CPU: 0
[15840.675016] EIP is at kmap+0x15/0x56
[15840.675016] EAX: 00000246 EBX: fffffffb ECX: 00000000 EDX: cab919c0
[15840.675016] ESI: 000007dd EDI: cab0bcf4 EBP: cab0bc98 ESP: cab0bc94
[15840.675016] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
[15840.675016] Process ln (pid: 11575, ti=cab0b000 task=cab919c0 task.ti=cab0b000)
[15840.675016] Stack: 00000000 cab0bcdc c0231cfb 00000000 cab0bce0 00000800 ca9290c0 fffffffb
[15840.675016] cab145d0 cab919c0 cab15998 22222222 22222222 22222222 00000001 cab15960
[15840.675016] 000007dd cab0bcf4 cab0bd04 c022cb3a cab0bcf4 cab15a6c ca9290c0 00000000
[15840.675016] Call Trace:
[15840.675016] [<c0231cfb>] ? hfsplus_block_allocate+0x6f/0x2d3
[15840.675016] [<c022cb3a>] ? hfsplus_file_extend+0xc4/0x1db
[15840.675016] [<c022ce41>] ? hfsplus_get_block+0x8c/0x19d
[15840.675016] [<c06adde4>] ? sub_preempt_count+0x9d/0xab
[15840.675016] [<c019ece6>] ? __block_prepare_write+0x147/0x311
[15840.675016] [<c0161934>] ? __grab_cache_page+0x52/0x73
[15840.675016] [<c019ef4f>] ? block_write_begin+0x79/0xd5
[15840.675016] [<c022cdb5>] ? hfsplus_get_block+0x0/0x19d
[15840.675016] [<c019f22a>] ? cont_write_begin+0x27f/0x2af
[15840.675016] [<c022cdb5>] ? hfsplus_get_block+0x0/0x19d
[15840.675016] [<c0139ebe>] ? tick_program_event+0x28/0x4c
[15840.675016] [<c013bd35>] ? trace_hardirqs_off+0xb/0xd
[15840.675016] [<c022b723>] ? hfsplus_write_begin+0x2d/0x32
[15840.675016] [<c022cdb5>] ? hfsplus_get_block+0x0/0x19d
[15840.675016] [<c0161988>] ? pagecache_write_begin+0x33/0x107
[15840.675016] [<c01879e5>] ? __page_symlink+0x3c/0xae
[15840.675016] [<c019ad34>] ? __mark_inode_dirty+0x12f/0x137
[15840.675016] [<c0187a70>] ? page_symlink+0x19/0x1e
[15840.675016] [<c022e6eb>] ? hfsplus_symlink+0x41/0xa6
[15840.675016] [<c01886a9>] ? vfs_symlink+0x99/0x101
[15840.675016] [<c018a2f6>] ? sys_symlinkat+0x6b/0xad
[15840.675016] [<c018a348>] ? sys_symlink+0x10/0x12
[15840.675016] [<c01038bd>] ? sysenter_do_call+0x12/0x31
[15840.675016] =======================
[15840.675016] Code: 00 00 75 10 83 3d 88 2f ec c0 02 75 07 89 d0 e8 12 56 05 00 5d c3 55 ba 06 00 00 00 89 e5 53 89 c3 b8 3d eb 7e c0 e8 16 74 00 00 <8b> 03 c1 e8 1e 69 c0 d8 02 00 00 05 b8 69 8e c0 2b 80 c4 02 00
[15840.675016] EIP: [<c0116a4f>] kmap+0x15/0x56 SS:ESP 0068:cab0bc94
[15840.675016] ---[ end trace 4fea40dad6b70e5f ]---
This happens because the return value of read_mapping_page() is passed on
to kmap unchecked. The bug is triggered after the first
read_mapping_page() in hfsplus_block_allocate(); this patch fixes all
three usages in this function but leaves the ones further down in the
file unchanged.
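The shape of the fix, sketched; the variable and label names are illustrative:
page = read_mapping_page(mapping, pnr, NULL);
if (IS_ERR(page))
	goto out;	/* propagate the error instead of kmap()ing an ERR_PTR */
curr = kmap(page);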
Signed-off-by: Eric Sesterhenn <snakebyte@gmx.de>
Cc: Roman Zippel <zippel@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit efc7ffcb4237f8cb9938909041c4ed38f6e1bf40 upstream
When an hfsplus image gets corrupted it might happen that the catalog
namelength field gets b0rked. If we mount such an image the memcpy() in
hfsplus_cat_build_key_uni() writes more than the 255 that fit in the name
field. Depending on the size of the overwritten data, we either only get
memory corruption or also trigger an oops like this:
[ 221.628020] BUG: unable to handle kernel paging request at c82b0000
[ 221.629066] IP: [<c022d4b1>] hfsplus_find_cat+0x10d/0x151
[ 221.629066] *pde = 0ea29163 *pte = 082b0160
[ 221.629066] Oops: 0002 [#1] PREEMPT DEBUG_PAGEALLOC
[ 221.629066] Modules linked in:
[ 221.629066]
[ 221.629066] Pid: 4845, comm: mount Not tainted (2.6.27-rc4-00123-gd3ee1b4-dirty #28)
[ 221.629066] EIP: 0060:[<c022d4b1>] EFLAGS: 00010206 CPU: 0
[ 221.629066] EIP is at hfsplus_find_cat+0x10d/0x151
[ 221.629066] EAX: 00000029 EBX: 00016210 ECX: 000042c2 EDX: 00000002
[ 221.629066] ESI: c82d70ca EDI: c82b0000 EBP: c82d1bcc ESP: c82d199c
[ 221.629066] DS: 007b ES: 007b FS: 0000 GS: 0033 SS: 0068
[ 221.629066] Process mount (pid: 4845, ti=c82d1000 task=c8224060 task.ti=c82d1000)
[ 221.629066] Stack: c080b3c4 c82aa8f8 c82d19c2 00016210 c080b3be c82d1bd4 c82aa8f0 00000300
[ 221.629066] 01000000 750008b1 74006e00 74006900 65006c00 c82d6400 c013bd35 c8224060
[ 221.629066] 00000036 00000046 c82d19f0 00000082 c8224548 c8224060 00000036 c0d653cc
[ 221.629066] Call Trace:
[ 221.629066] [<c013bd35>] ? trace_hardirqs_off+0xb/0xd
[ 221.629066] [<c013bca3>] ? trace_hardirqs_off_caller+0x14/0x9b
[ 221.629066] [<c013bd35>] ? trace_hardirqs_off+0xb/0xd
[ 221.629066] [<c013bca3>] ? trace_hardirqs_off_caller+0x14/0x9b
[ 221.629066] [<c013bd35>] ? trace_hardirqs_off+0xb/0xd
[ 221.629066] [<c0107aa3>] ? native_sched_clock+0x82/0x96
[ 221.629066] [<c01302d2>] ? __kernel_text_address+0x1b/0x27
[ 221.629066] [<c010487a>] ? dump_trace+0xca/0xd6
[ 221.629066] [<c0109e32>] ? save_stack_address+0x0/0x2c
[ 221.629066] [<c0109eaf>] ? save_stack_trace+0x1c/0x3a
[ 221.629066] [<c013b571>] ? save_trace+0x37/0x8d
[ 221.629066] [<c013b62e>] ? add_lock_to_list+0x67/0x8d
[ 221.629066] [<c013ea1c>] ? validate_chain+0x8a4/0x9f4
[ 221.629066] [<c013553d>] ? down+0xc/0x2f
[ 221.629066] [<c013f1f6>] ? __lock_acquire+0x68a/0x6e0
[ 221.629066] [<c013bd35>] ? trace_hardirqs_off+0xb/0xd
[ 221.629066] [<c013bca3>] ? trace_hardirqs_off_caller+0x14/0x9b
[ 221.629066] [<c013bd35>] ? trace_hardirqs_off+0xb/0xd
[ 221.629066] [<c0107aa3>] ? native_sched_clock+0x82/0x96
[ 221.629066] [<c013da5d>] ? mark_held_locks+0x43/0x5a
[ 221.629066] [<c013dc3a>] ? trace_hardirqs_on+0xb/0xd
[ 221.629066] [<c013dbf4>] ? trace_hardirqs_on_caller+0xf4/0x12f
[ 221.629066] [<c06abec8>] ? _spin_unlock_irqrestore+0x42/0x58
[ 221.629066] [<c013555c>] ? down+0x2b/0x2f
[ 221.629066] [<c022aa68>] ? hfsplus_iget+0xa0/0x154
[ 221.629066] [<c022b0b9>] ? hfsplus_fill_super+0x280/0x447
[ 221.629066] [<c0107aa3>] ? native_sched_clock+0x82/0x96
[ 221.629066] [<c013bca3>] ? trace_hardirqs_off_caller+0x14/0x9b
[ 221.629066] [<c013bca3>] ? trace_hardirqs_off_caller+0x14/0x9b
[ 221.629066] [<c013f1f6>] ? __lock_acquire+0x68a/0x6e0
[ 221.629066] [<c041c9e4>] ? string+0x2b/0x74
[ 221.629066] [<c041cd16>] ? vsnprintf+0x2e9/0x512
[ 221.629066] [<c010487a>] ? dump_trace+0xca/0xd6
[ 221.629066] [<c0109eaf>] ? save_stack_trace+0x1c/0x3a
[ 221.629066] [<c0109eaf>] ? save_stack_trace+0x1c/0x3a
[ 221.629066] [<c013b571>] ? save_trace+0x37/0x8d
[ 221.629066] [<c013b62e>] ? add_lock_to_list+0x67/0x8d
[ 221.629066] [<c013ea1c>] ? validate_chain+0x8a4/0x9f4
[ 221.629066] [<c01354d3>] ? up+0xc/0x2f
[ 221.629066] [<c013f1f6>] ? __lock_acquire+0x68a/0x6e0
[ 221.629066] [<c013bd35>] ? trace_hardirqs_off+0xb/0xd
[ 221.629066] [<c013bca3>] ? trace_hardirqs_off_caller+0x14/0x9b
[ 221.629066] [<c013bd35>] ? trace_hardirqs_off+0xb/0xd
[ 221.629066] [<c0107aa3>] ? native_sched_clock+0x82/0x96
[ 221.629066] [<c041cfb7>] ? snprintf+0x1b/0x1d
[ 221.629066] [<c01ba466>] ? disk_name+0x25/0x67
[ 221.629066] [<c0183960>] ? get_sb_bdev+0xcd/0x10b
[ 221.629066] [<c016ad92>] ? kstrdup+0x2a/0x4c
[ 221.629066] [<c022a7b3>] ? hfsplus_get_sb+0x13/0x15
[ 221.629066] [<c022ae39>] ? hfsplus_fill_super+0x0/0x447
[ 221.629066] [<c0183583>] ? vfs_kern_mount+0x3b/0x76
[ 221.629066] [<c0183602>] ? do_kern_mount+0x32/0xba
[ 221.629066] [<c01960d4>] ? do_new_mount+0x46/0x74
[ 221.629066] [<c0196277>] ? do_mount+0x175/0x193
[ 221.629066] [<c013dbf4>] ? trace_hardirqs_on_caller+0xf4/0x12f
[ 221.629066] [<c01663b2>] ? __get_free_pages+0x1e/0x24
[ 221.629066] [<c06ac07b>] ? lock_kernel+0x19/0x8c
[ 221.629066] [<c01962e6>] ? sys_mount+0x51/0x9b
[ 221.629066] [<c01962f9>] ? sys_mount+0x64/0x9b
[ 221.629066] [<c01038bd>] ? sysenter_do_call+0x12/0x31
[ 221.629066] =======================
[ 221.629066] Code: 89 c2 c1 e2 08 c1 e8 08 09 c2 8b 85 e8 fd ff ff 66 89 50 06 89 c7 53 83 c7 08 56 57 68 c4 b3 80 c0 e8 8c 5c ef ff 89 d9 c1 e9 02 <f3> a5 89 d9 83 e1 03 74 02 f3 a4 83 c3 06 8b 95 e8 fd ff ff 0f
[ 221.629066] EIP: [<c022d4b1>] hfsplus_find_cat+0x10d/0x151 SS:ESP 0068:c82d199c
[ 221.629066] ---[ end trace e417a1d67f0d0066 ]---
Since hfsplus_cat_build_key_uni() returns void and only has one callsite,
the check is performed at the callsite.
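A sketch of such a callsite check; the record and field names follow the catalog thread record but are assumptions here:
/* refuse to build the key from a corrupted on-disk name length */
if (be16_to_cpu(tmp.thread.nodeName.length) > 255) {
	printk(KERN_ERR "hfs: catalog name length corrupted\n");
	return -EIO;
}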
Signed-off-by: Eric Sesterhenn <snakebyte@gmx.de>
Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Roman Zippel <zippel@linux-m68k.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 61579ba83934d397a4fa2bb7372de9ae112587d5 upstream.
Input: atkbd - expand Latitude's force release quirk to other Dells
Dell laptops fail to send key up events for several of their special
keys. There's an existing quirk in the kernel to handle this, but it's
limited to the Latitude range. This patch extends it to cover all
portable Dells.
Signed-off-by: Matthew Garrett <mjg@redhat.com>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit a68823ee5285e65b51ceb96f8b13a5b4f99a6888 upstream.
ACPI: Clear WAK_STS on resume
The leading other brand OS appears to clear the WAK_STS flag on resume.
When rebooted, certain BIOSes assume that the system is actually
resuming if it's still set and so fail to reboot correctly. Make sure
that it's cleared at resume time.
Comment clarified as suggested by Bob Moore
http://bugzilla.kernel.org/show_bug.cgi?id=11634
Signed-off-by: Matthew Garrett <mjg@redhat.com>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Tested-by: Romano Giannetti <romano.giannetti@gmail.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 8fd145917fb62368a9b80db59562c20576238f5a upstream
ACPI: Ignore the RESET_REG_SUP bit when using the ACPI reset mechanism
According to ACPI 3.0, FADT.flags.RESET_REG_SUP indicates
whether the ACPI reboot mechanism is supported.
However, some boxes have this bit clear, have a valid
ACPI_RESET_REG & RESET_VALUE, and ACPI reboot is the only
mechanism that works for them after S3.
This suggests that other operating systems may not be checking
the RESET_REG_SUP bit, and are using other means to decide
whether to use the ACPI reboot mechanism or not.
Here we stop checking RESET_REG_SUP.
Instead, when an ACPI reboot is requested,
only the reset_register is checked. If the following
conditions are met, the reset register is considered supported:
a. reset_register is not zero
b. the access width is eight
c. the bit_offset is zero
http://bugzilla.kernel.org/show_bug.cgi?id=7299
http://bugzilla.kernel.org/show_bug.cgi?id=1148
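Expressed as code, the check reduces to something like this (the helper name is illustrative):
static int acpi_reset_reg_usable(const struct acpi_generic_address *r)
{
	return r->address != 0 &&	/* a. reset_register is not zero */
	       r->bit_width == 8 &&	/* b. the access width is eight  */
	       r->bit_offset == 0;	/* c. the bit_offset is zero     */
}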
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 054e5f616b5becdc096b793407dc33fe379749ac upstream
libata: Fix LBA48 on pata_it821x RAID volumes.
[http://lkml.org/lkml/2008/10/18/82]
Signed-off-by: Ondrej Zary <linux@rainbow-software.org>
Acked-by: Alan Cox <alan@redhat.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit c0ff17720ec5f42205b3d2ca03a18da0a8272976 upstream.
Signed-off-by: Alexey Starikovskiy <astarikovskiy@suse.de>
Tested-by: Alan Jenkins <alan-jenkins@tuffmail.co.uk>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 8463200a00fe2aea938b40173198a0983f2929ef upstream
(needed by the next patch)
No functional changes.
Signed-off-by: Alexey Starikovskiy <astarikovskiy@suse.de>
Acked-by: Rafael J. Wysocki <rjw@suse.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 7c6db4e050601f359081fde418ca6dc4fc2d0011 upstream.
It is easier and faster to do the transaction directly from interrupt context
rather than waking the control thread.
Also, cleaner GPE storm avoidance is implemented.
References: http://bugzilla.kernel.org/show_bug.cgi?id=9998
http://bugzilla.kernel.org/show_bug.cgi?id=10724
http://bugzilla.kernel.org/show_bug.cgi?id=10919
http://bugzilla.kernel.org/show_bug.cgi?id=11309
http://bugzilla.kernel.org/show_bug.cgi?id=11549
Signed-off-by: Alexey Starikovskiy <astarikovskiy@suse.de>
Tested-by: Sitsofe Wheeler <sitsofe@yahoo.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 3c324283e6cdb79210cf7975c3e40d3ba3e672b2 upstream
All three flavors of sata_nv's are different in how their hardreset
behaves.
* generic: Hardreset is not reliable. Link often doesn't come online
after hardreset.
* nf2/3: A little bit better - link comes online with longer debounce
timing. However, nf2/3 can't reliably wait for the first D2H
Register FIS, so it can't wait for device readiness or classify the
device after hardreset. A follow-up SRST is required.
* ck804: Hardreset finally works.
The core layer change to prefer hardreset and follow up changes
exposed the above issues and caused various detection regressions for
all three flavors. This patch, hopefully, fixes all the known issues
and should make sata_nv error handling more reliable.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit cadef677e4a9b9c1d069675043767df486782986 upstream
Promise ATA engines need to be reset when errors occur.
That's currently done for errors detected by sata_promise itself,
but it's not done for errors like timeouts detected outside of
the low-level driver.
The effect of this omission is that a timeout tends to result
in a sequence of failed COMRESETs after which libata EH gives
up and disables the port. At that point the port's ATA engine
hangs and even reloading the driver will not resume it.
To fix this, make sata_promise override ->hardreset on SATA
ports with code which calls pdc_reset_port() on the port in
question before calling libata's hardreset. PATA ports don't
use ->hardreset, so for those we override ->softreset instead.
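A sketch of the SATA-port override; libata's generic SFF hardreset is assumed as the fallback, and the names may differ from the actual driver:
static int pdc_sata_hardreset(struct ata_link *link, unsigned int *class,
			      unsigned long deadline)
{
	pdc_reset_port(link->ap);	/* un-hang the Promise ATA engine first */
	return sata_sff_hardreset(link, class, deadline);
}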
Signed-off-by: Mikael Pettersson <mikpe@it.uu.se>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 758a7f7bb86b520aadc484f23da85e547b3bf3d8 upstream
x86: register a platform RTC device if PNP doesn't describe it
Most if not all x86 platforms have an RTC device, but sometimes the RTC
is not exposed as a PNP0b00/PNP0b01/PNP0b02 device in PNPBIOS or ACPI:
http://bugzilla.kernel.org/show_bug.cgi?id=11580
https://bugzilla.redhat.com/show_bug.cgi?id=451188
It's best if we can discover the RTC via PNP because then we know
which flavor of device it is, where it lives, and which IRQ it uses.
But if we can't, we should register a platform device using the
compiled-in RTC_PORT/RTC_IRQ resource assumptions.
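Roughly, the fallback registration can look like the sketch below; the structure and the pnp_platform_devices guard are assumptions, not a copy of the patch:
static struct resource rtc_resources[] = {
	{ .start = RTC_PORT(0), .end = RTC_PORT(1), .flags = IORESOURCE_IO  },
	{ .start = RTC_IRQ,     .end = RTC_IRQ,     .flags = IORESOURCE_IRQ },
};

static struct platform_device rtc_device = {
	.name		= "rtc_cmos",
	.id		= -1,
	.resource	= rtc_resources,
	.num_resources	= ARRAY_SIZE(rtc_resources),
};

static int __init add_rtc_cmos(void)
{
	if (!pnp_platform_devices)	/* PNP did not describe an RTC */
		platform_device_register(&rtc_device);
	return 0;
}
device_initcall(add_rtc_cmos);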
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: David Brownell <dbrownell@users.sourceforge.net>
Reported-by: Rik Theys <rik.theys@esat.kuleuven.be>
Reported-by: shr_msn@yahoo.com.tw
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 3030ca4cf4abbdd2dd850a14d20e9fca5937ffb5 upstream
USB: storage: Avoid I/O errors when issuing SCSI ioctls to JMicron USB/ATA bridge
Here's the patch that implements the fix you suggested to avoid the
I/O errors that I was running into with my new USB enclosure with a
JMicron USB/ATA bridge, while issuing scsi-io USN or other such
queries used by Fedora's mkinitrd.
http://bugzilla.kernel.org/show_bug.cgi?id=9638#c85
/proc/bus/usb/devices:
T: Bus=01 Lev=01 Prnt=01 Port=07 Cnt=04 Dev#= 5 Spd=480 MxCh= 0
D: Ver= 2.00 Cls=00(>ifc ) Sub=00 Prot=00 MxPS=64 #Cfgs= 1
P: Vendor=152d ProdID=2329 Rev= 1.00
S: Manufacturer=JMicron
S: Product=USB to ATA/ATAPI Bridge
S: SerialNumber=DE5088854FFF
C:* #Ifs= 1 Cfg#= 1 Atr=c0 MxPwr= 2mA
I:* If#= 0 Alt= 0 #EPs= 2 Cls=08(stor.) Sub=06 Prot=50 Driver=usb-storage
E: Ad=81(I) Atr=02(Bulk) MxPS= 512 Ivl=0ms
E: Ad=02(O) Atr=02(Bulk) MxPS= 512 Ivl=0ms
(patch applied and retested on a modified 2.6.27.2-libre.24.rc1.fc10)
Signed-off-by: Phil Dibowitz <phil@ipom.com>
Cc: Alexandre Oliva <oliva@lsd.ic.unicamp.br>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 72f22b1eb6ca5e4676a632a04d40d46cb61d4562 upstream
rtc-cmos: look for PNP RTC first, then for platform RTC
We shouldn't rely on "pnp_platform_devices" to tell us whether there
is a PNP RTC device.
I introduced "pnp_platform_devices", but I think it was a mistake.
All it tells us is whether we found any PNPBIOS or PNPACPI devices.
Many machines have some PNP devices, but do not describe the RTC
via PNP. On those machines, we need to do the platform driver probe
to find the RTC.
We should just register the PNP driver and see whether it claims anything.
If we don't find a PNP RTC, fall back to the platform driver probe.
This (in conjunction with the arch/x86/kernel/rtc.c patch to add
a platform RTC device when PNP doesn't have one) should resolve
these issues:
http://bugzilla.kernel.org/show_bug.cgi?id=11580
https://bugzilla.redhat.com/show_bug.cgi?id=451188
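One plausible shape of that probe order in the driver's init path; the driver-internal symbol names here are assumptions:
static int __init cmos_init(void)
{
	int ret = pnp_register_driver(&cmos_pnp_driver);

	if (ret == 0 && cmos_rtc.dev)
		return 0;		/* a PNP RTC node was found and claimed */
	if (ret == 0)
		pnp_unregister_driver(&cmos_pnp_driver);

	/* no PNP RTC: fall back to the platform device probe */
	return platform_driver_probe(&cmos_platform_driver, cmos_platform_probe);
}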
Signed-off-by: Bjorn Helgaas <bjorn.helgaas@hp.com>
Acked-by: Rafael J. Wysocki <rjw@sisk.pl>
Acked-by: David Brownell <dbrownell@users.sourceforge.net>
Reported-by: Rik Theys <rik.theys@esat.kuleuven.be>
Reported-by: shr_msn@yahoo.com.tw
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit e1e23bb0513520035ec934fa3483507cb6648b7c upstream
x86: avoid dereferencing beyond stack + THREAD_SIZE
It's possible for get_wchan() to dereference past task->stack + THREAD_SIZE
while iterating through instruction pointers if fp equals the upper boundary,
causing a kernel panic.
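In sketch form, the bound check becomes (names illustrative):
unsigned long stack = (unsigned long)task_stack_page(p);

/* refuse to dereference a frame pointer at or beyond the top of the stack */
if (fp < stack || fp >= stack + THREAD_SIZE - sizeof(unsigned long))
	return 0;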
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 5b7dba4ff834259a5623e03a565748704a8fe449 upstream
sched_clock: prevent scd->clock from moving backwards
When sched_clock_cpu() couples the clocks between two cpus, it may
increment scd->clock beyond the GTOD tick window that __update_sched_clock()
uses to clamp the clock. A later call to __update_sched_clock() may move
the clock back to scd->tick_gtod + TICK_NSEC, violating the clock's
monotonic property.
This patch ensures that scd->clock will not be set backward.
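A sketch of the clamping, following kernel/sched_clock.c loosely; the variable names are assumed:
u64 min_clock = max(scd->tick_gtod, scd->clock);	/* never go backwards */
u64 max_clock = scd->tick_gtod + TICK_NSEC;

clock = clamp(scd->tick_gtod + delta, min_clock, max_clock);
scd->clock = clock;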
Signed-off-by: Dave Kleikamp <shaggy@linux.vnet.ibm.com>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 0c4b83da58ec2e96ce9c44c211d6eac5f9dae478 upstream
sched: disable the hrtick for now
David Miller reported that hrtick update overhead has tripled the
wakeup overhead on Sparc64.
That is too much - disable the HRTICK feature for now by default,
until a faster implementation is found.
Reported-by: David Miller <davem@davemloft.net>
Acked-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit e354597cce8d219d135d65e585dc4f30323486b9 upstream.
Since patch 6ac665c63dcac8fcec534a1d224ecbb8b867ad59 my infiniband
controller hasn't worked. This is because it has 64-bit prefetchable
memory, which was mistakenly being taken to be 32-bit memory. The
resource flags in this case are PCI_BASE_ADDRESS_MEM_TYPE_64 |
PCI_BASE_ADDRESS_MEM_PREFETCH.
This patch checks only for the PCI_BASE_ADDRESS_MEM_TYPE_64 bit; thus
whether the region is prefetchable or not is ignored. This fixes my
Infiniband.
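The check, in sketch form, using the constants from <linux/pci_regs.h>:
/* a 64-bit BAR is identified by its type bits alone; whether it is
 * also prefetchable must not change how it is decoded */
int is_mem64 = (flags & PCI_BASE_ADDRESS_MEM_TYPE_MASK) ==
	       PCI_BASE_ADDRESS_MEM_TYPE_64;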
Reviewed-by: Matthew Wilcox <matthew@wil.cx>
Signed-off-by: Peter Chubb <peterc@gelato.unsw.edu.au>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
cherry picked from commit 11fc9a4a440112b5afc1a99d86ba92d70205a688
DVB: s5h1411: Power down s5h1411 when not in use
Power down the s5h1411 demodulator when not in use
(on the Pinnacle 801e, this brings idle power from
123 mA down to 84 mA).
Signed-off-by: Devin Heitmueller <devin.heitmueller@gmail.com>
Acked-by: Steven Toth <stoth@linuxtv.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Michael Krufky <mkrufky@linuxtv.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
cherry picked from commit f0d041e50bc6c8a677922d72b010f80af9b23b18
DVB: s5h1411: Perform s5h1411 soft reset after tuning
If you instruct the tuner to change frequencies, it can take up to 2500ms to
get a demod lock. By performing a soft reset after the tuning call (which
is consistent with how the Pinnacle 801e Windows driver behaves), you get
a demod lock inside of 300ms.
Signed-off-by: Devin Heitmueller <devin.heitmueller@gmail.com>
Signed-off-by: Michael Krufky <mkrufky@linuxtv.org>
Acked-by: Steven Toth <stoth@linuxtv.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
cherry picked from commit 1af46b450fa49c57d73764d66f267335ccd807e2
DVB: s5h1411: bugfix: Setting serial or parallel mode could destroy bits
Adding a serialmode function to read/and/or/write the register for safety.
Signed-off-by: Steven Toth <stoth@linuxtv.org>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Michael Krufky <mkrufky@linuxtv.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
cherry picked from commit 3f93d1adca658201c64251c43a147cc79d468c3f
V4L: pvrusb2: Keep MPEG PTSs from drifting away
This change was figured out by Boris Dores after
empirically comparing against the behavior of the Windows driver.
Signed-off-by: Mike Isely <isely@pobox.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Signed-off-by: Michael Krufky <mkrufky@linuxtv.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit d8009882e9f5e1a76986c741f071edd2ad760c97 upstream
The lock used in snd_ctl_dev_disconnect() should be card->ctl_files_rwlock
for protection of card->ctl_files entries, instead of card->controls_rwsem.
Reported-by: Vegard Nossum <vegard.nossum@gmail.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Jaroslav Kysela <perex@perex.cz>
Cc: Chris Wedgwood <cw@f00f.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 4a029abee0f1d69cb0445657d6fa5a38597bd17d upstream
The scx200_i2c driver is missing the .class parameter, which means no
i2c drivers are willing to probe for devices on the bus and attach to
them.
Signed-off-by: Len Sorensen <lsorense@csclub.uwaterloo.ca>
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 846557d3ceb6c7493e090921db5d6158ec237228 upstream
Replace all references to the old i2c mailing list.
This isn't a bug fix, but I would hate to miss bug reports because
they're sent to a dead mailing list.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 4792adbac9eb41cea77a45ab76258ea10d411173 upstream
If mem= is used on the boot command line to limit memory, then the memory block where a 16G page resides may not be available.
Thanks to Michael Ellerman for finding the problem.
Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit e81703724a966120ace6504c993bda9e084cbf3e upstream.
Adjust amount to reserve based on previous nodes for reserves spanning
multiple nodes. Check if the node active range is empty before attempting
to pass the reserve to bootmem. In practice the range shouldn't be empty,
but to be sure we check.
Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 8f64e1f2d1e09267ac926e15090fd505c1c0cbcb upstream
If there are multiple reserved memory blocks via lmb_reserve() that are
contiguous addresses and on different NUMA nodes we are losing track of which
address ranges to reserve in bootmem on which node. I discovered this
when I recently got to try 16GB huge pages on a system with more than 2 nodes.
When scanning the device tree in early boot we call lmb_reserve() with
the addresses of the 16G pages that we find so that the memory doesn't
get used for something else. For example the addresses for the pages
could be 4000000000, 4400000000, 4800000000, 4C00000000, etc - 8 pages,
one on each of eight nodes. In the lmb after all the pages have been
reserved it will look something like the following:
lmb_dump_all:
memory.cnt = 0x2
memory.size = 0x3e80000000
memory.region[0x0].base = 0x0
.size = 0x1e80000000
memory.region[0x1].base = 0x4000000000
.size = 0x2000000000
reserved.cnt = 0x5
reserved.size = 0x3e80000000
reserved.region[0x0].base = 0x0
.size = 0x7b5000
reserved.region[0x1].base = 0x2a00000
.size = 0x78c000
reserved.region[0x2].base = 0x328c000
.size = 0x43000
reserved.region[0x3].base = 0xf4e8000
.size = 0xb18000
reserved.region[0x4].base = 0x4000000000
.size = 0x2000000000
The reserved.region[0x4] contains the 16G pages. In
arch/powerpc/mm/numa.c: do_init_bootmem() we loop through each of the
node numbers looking for the reserved regions that belong to the
particular node. It is not able to identify region 0x4 as being a part
of each of the 8 nodes. It is assuming that a reserved region is only
on a single node.
This patch takes out the reserved region loop from inside
the loop that goes over each node. It looks up the active region containing
the start of the reserved region. If it extends past that active region then
it adjusts the size and gets the next active region containing it.
Signed-off-by: Jon Tollefson <kniht@linux.vnet.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 22e181ba7f09197dd6f35a48013cb86289644eb6 upstream.
The i2c bus defn is broken on linkstation / kurobox machines since at
least 2.6.27. Fix it. Also remove CONFIG_SERIAL_OF_PLATFORM, which, if
enabled, breaks the serial console after the
"console handover: boot [udbg0] -> real [ttyS1]" message.
Signed-off-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de>
Signed-off-by: Kumar Gala <galak@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
upstream commit df316e939100e789b3c5d4d102619ccf5834bd00
Currently an EV_SYN event is not always reported to userland
after the EV_SW SW_LID event has been sent. This is easy to verify
by using "input-events" from input-utils and just closing and opening
the lid.
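In code, the fix amounts to pairing the switch report with a sync; the input device pointer name is illustrative:
input_report_switch(button->input, SW_LID, lid_closed);
input_sync(button->input);	/* emit EV_SYN so userland sees a complete packet */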
Signed-off-by: Guillem Jover <guillem.jover@nokia.com>
Acked-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
Same as commit cd1f70fdb4823c97328a1f151f328eb36fafd579 upstream
1: There is a small race between queue_delayed_work() and its
corresponding kref |