<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/arch/sh/kernel/vmlinux.lds.S, branch v3.0.82</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/arch/sh/kernel/vmlinux.lds.S?h=v3.0.82</id>
<link rel='self' href='https://git.amat.us/linux/atom/arch/sh/kernel/vmlinux.lds.S?h=v3.0.82'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2011-03-24T17:50:09Z</updated>
<entry>
<title>percpu: Always align percpu output section to PAGE_SIZE</title>
<updated>2011-03-24T17:50:09Z</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2011-03-24T17:50:09Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=0415b00d175e0d8945e6785aad21b5f157976ce0'/>
<id>urn:sha1:0415b00d175e0d8945e6785aad21b5f157976ce0</id>
<content type='text'>
The percpu allocator honors alignment requests up to PAGE_SIZE, and
both the percpu addresses in the percpu address space and the
translated kernel addresses should be aligned accordingly.  The
calculation of the former depends on the alignment of the percpu
output section in the kernel image.

The linker script macros PERCPU_VADDR() and PERCPU() are used to
define this output section, and the latter takes an @align parameter.
Several architectures use an @align smaller than PAGE_SIZE, breaking
percpu memory alignment.

This patch removes the @align parameter from PERCPU(), renames it to
PERCPU_SECTION(), and makes it always align to PAGE_SIZE.  While at
it, it adds PCPU_SETUP_BUG_ON() checks so that alignment problems are
reliably detected, and removes the percpu alignment comment recently
added in workqueue.c, as the condition would trigger a BUG long
before reaching there.
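
To illustrate the shape of the change, an arch linker script now
simply invokes e.g. PERCPU_SECTION(L1_CACHE_BYTES), with no way to
request a smaller alignment; the macro expands to roughly the
following (a simplified sketch of include/asm-generic/vmlinux.lds.h,
symbol bookkeeping elided):

	#define PERCPU_SECTION(cacheline)				\
		. = ALIGN(PAGE_SIZE);					\
		.data..percpu : AT(ADDR(.data..percpu) - LOAD_OFFSET) {	\
			PERCPU_INPUT(cacheline)				\
		}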

For um, this patch raises the alignment of the percpu area.  As the
area is in .init, there shouldn't be any noticeable difference.

This problem was discovered by David Howells while debugging boot
failure on mn10300.

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Acked-by: Mike Frysinger &lt;vapier@gentoo.org&gt;
Cc: uclinux-dist-devel@blackfin.uclinux.org
Cc: David Howells &lt;dhowells@redhat.com&gt;
Cc: Jeff Dike &lt;jdike@addtoit.com&gt;
Cc: user-mode-linux-devel@lists.sourceforge.net
</content>
</entry>
<entry>
<title>percpu: align percpu readmostly subsection to cacheline</title>
<updated>2011-01-25T13:26:50Z</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2011-01-25T13:26:50Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=19df0c2fef010e94e90df514aaf4e73f6b80145c'/>
<id>urn:sha1:19df0c2fef010e94e90df514aaf4e73f6b80145c</id>
<content type='text'>
Currently the percpu readmostly subsection may share cachelines with
other percpu subsections, which can result in unnecessary cacheline
bouncing and performance degradation.

This patch adds a @cacheline parameter to the PERCPU() and
PERCPU_VADDR() linker macros and makes each arch's linker script
specify its cacheline size and use it to align percpu subsections.
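
Schematically, the shared section body then keeps the readmostly
input on its own cacheline, roughly like this simplified sketch
(symbol bookkeeping omitted):

	*(.data..percpu..first)
	. = ALIGN(PAGE_SIZE);
	*(.data..percpu..page_aligned)
	. = ALIGN(cacheline);
	*(.data..percpu..readmostly)
	. = ALIGN(cacheline);
	*(.data..percpu)
	*(.data..percpu..shared_aligned)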

This is based on Shaohua's x86-only patch.

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Cc: Shaohua Li &lt;shaohua.li@intel.com&gt;
</content>
</entry>
<entry>
<title>sh: Kill off some superfluous legacy PMB special casing.</title>
<updated>2010-02-16T12:43:38Z</updated>
<author>
<name>Paul Mundt</name>
<email>lethal@linux-sh.org</email>
</author>
<published>2010-02-16T12:43:38Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=1d5cfcdff793e2f34ec61d902fa5ee0c7e4a2208'/>
<id>urn:sha1:1d5cfcdff793e2f34ec61d902fa5ee0c7e4a2208</id>
<content type='text'>
The __va()/__pa() offsets and the boot memory offsets are consistent
for all PMB users, so there is no need to special-case these for
legacy PMB.  Kill the special casing off and depend on CONFIG_PMB
across the board.  This also fixes up yet another addressing bug for
sh64.
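
Conceptually, both translations reduce to a single fixed offset pair
for every PMB user, along the lines of this simplified sketch (not
the verbatim sh definitions):

	/* sketch: fixed-offset translations shared by all PMB users */
	#define __pa(x)	((unsigned long)(x) - PAGE_OFFSET + __MEMORY_START)
	#define __va(x)	((void *)((unsigned long)(x) + PAGE_OFFSET - __MEMORY_START))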

Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
<entry>
<title>sh: Fix up legacy PMB mode offset calculation.</title>
<updated>2010-02-15T07:10:57Z</updated>
<author>
<name>Paul Mundt</name>
<email>lethal@linux-sh.org</email>
</author>
<published>2010-02-15T07:10:57Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=04c869735541c27dd137c55f35f8a18bb372bbe1'/>
<id>urn:sha1:04c869735541c27dd137c55f35f8a18bb372bbe1</id>
<content type='text'>
The change that fixed up sh64 inadvertently inverted the logic for
legacy PMB; fix that back up.

Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
<entry>
<title>sh64: fix up memory offset calculation.</title>
<updated>2010-02-12T06:41:45Z</updated>
<author>
<name>Paul Mundt</name>
<email>lethal@linux-sh.org</email>
</author>
<published>2010-02-12T06:41:45Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=19f6b8b44e3f633d5d7d1ed68848b1eb89a1e800'/>
<id>urn:sha1:19f6b8b44e3f633d5d7d1ed68848b1eb89a1e800</id>
<content type='text'>
The linker script offsets were broken by the recent 29/32-bit
integration, so this fixes them up for sh64.

Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
<entry>
<title>sh: kmemleak support.</title>
<updated>2010-01-27T13:03:11Z</updated>
<author>
<name>Chris Smith</name>
<email>chris.smith@st.com</email>
</author>
<published>2010-01-27T13:03:11Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=660e2acad81c19b404f7d7d06e57a6d5e6ce7426'/>
<id>urn:sha1:660e2acad81c19b404f7d7d06e57a6d5e6ce7426</id>
<content type='text'>
Enables support for kmemleak on sh.

Signed-off-by: Chris Smith &lt;chris.smith@st.com&gt;
Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
<entry>
<title>sh: Kill off the special uncached section and fixmap.</title>
<updated>2010-01-21T07:05:25Z</updated>
<author>
<name>Paul Mundt</name>
<email>lethal@linux-sh.org</email>
</author>
<published>2010-01-21T07:05:25Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=2dc2f8e0c46864e2a3722c84eaa96513d4cf8b2f'/>
<id>urn:sha1:2dc2f8e0c46864e2a3722c84eaa96513d4cf8b2f</id>
<content type='text'>
Now that cached_to_uncached works as advertised in 32-bit mode and
we're never going to be able to map &lt; 16MB anyway, there's no need
for the special uncached section.  Kill it off.
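
With the offset honored in 32-bit mode, an uncached alias of any
cached P1 address can be formed arithmetically instead, along these
lines (illustrative; the cached pointer is a stand-in):

	/* illustrative: derive the uncached alias of a cached address */
	void *uncached = (void *)((unsigned long)cached + cached_to_uncached);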

Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
<entry>
<title>sh: fixed PMB mode refactoring.</title>
<updated>2010-01-13T09:31:48Z</updated>
<author>
<name>Paul Mundt</name>
<email>lethal@linux-sh.org</email>
</author>
<published>2010-01-13T09:31:48Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=a0ab36689a36e583b6e736f1c99ac8c9aebdad59'/>
<id>urn:sha1:a0ab36689a36e583b6e736f1c99ac8c9aebdad59</id>
<content type='text'>
This introduces some much overdue chainsawing of the fixed PMB
support.  Fixed PMB was initially introduced to work around the fact
that dynamic PMB mode was relatively broken, though the two were
never intended to converge.  The main areas of difference are whether
the system is booted in 29-bit or 32-bit mode, and whether legacy
mappings are to be preserved.  Any system booting in true 32-bit mode
will not care about legacy mappings, so these concerns are roughly
decoupled.

Regardless of the entry point, PMB and 32BIT are directly related as far
as the kernel is concerned, so we also switch back to having one select
the other.

With legacy mappings iterated through and applied in the
initialization path, it's now possible to finally merge the two
implementations and permit dynamic remapping on top of the remaining
entries, regardless of whether the boot mappings are crafted by hand
or inherited from the boot loader.
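
Schematically, that initialization-path handling walks the PMB slots
the boot loader left populated and re-registers them with the common
code; the helpers below are hypothetical, for illustration only:

	/* sketch with hypothetical helpers: adopt boot loader entries */
	int i;
	for (i = 0; i &lt; NR_PMB_ENTRIES; i++)
		if (pmb_entry_preset(i))	/* set up by the loader? */
			pmb_register_entry(i);	/* hand to common code */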

Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
<entry>
<title>Merge branch 'master' of git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6</title>
<updated>2009-09-16T04:48:32Z</updated>
<author>
<name>Paul Mundt</name>
<email>lethal@linux-sh.org</email>
</author>
<published>2009-09-16T04:48:32Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=ea88023b3491a384575ebcd5e8a449e841a28a24'/>
<id>urn:sha1:ea88023b3491a384575ebcd5e8a449e841a28a24</id>
<content type='text'>
Conflicts:
	arch/sh/kernel/vmlinux.lds.S
</content>
</entry>
<entry>
<title>sh: dwarf unwinder support.</title>
<updated>2009-08-13T16:58:43Z</updated>
<author>
<name>Matt Fleming</name>
<email>matt@console-pimps.org</email>
</author>
<published>2009-08-13T16:58:43Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=bd353861c735b2265c9d8b2559960c693e7c68ab'/>
<id>urn:sha1:bd353861c735b2265c9d8b2559960c693e7c68ab</id>
<content type='text'>
This is a first cut at a generic DWARF unwinder for the kernel.  It
still lacks DWARF64 support, and the DWARF expression support hasn't
been tested very well, but it generates proper stacktraces on SH for
WARN_ON() and NULL dereferences.
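
For the unwinder to have call-frame information to parse at runtime,
the kernel linker script has to keep the CFI sections in the image;
schematically (a simplified fragment, not the verbatim vmlinux.lds.S
change; the bracketing symbols are illustrative):

	.eh_frame : AT(ADDR(.eh_frame) - LOAD_OFFSET) {
		__start_eh_frame = .;
		*(.eh_frame)
		__stop_eh_frame = .;
	}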

Signed-off-by: Matt Fleming &lt;matt@console-pimps.org&gt;
Signed-off-by: Paul Mundt &lt;lethal@linux-sh.org&gt;
</content>
</entry>
</feed>
